Nov 24 00:10:22.930127 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025
Nov 24 00:10:22.930168 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:10:22.930188 kernel: BIOS-provided physical RAM map:
Nov 24 00:10:22.930200 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 24 00:10:22.930211 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 24 00:10:22.930223 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Nov 24 00:10:22.930237 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 24 00:10:22.930250 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 24 00:10:22.930263 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 24 00:10:22.930275 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 24 00:10:22.930287 kernel: NX (Execute Disable) protection: active
Nov 24 00:10:22.930303 kernel: APIC: Static calls initialized
Nov 24 00:10:22.930315 kernel: e820: update [mem 0x768c0018-0x768c8e57] usable ==> usable
Nov 24 00:10:22.930328 kernel: extended physical RAM map:
Nov 24 00:10:22.930344 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 24 00:10:22.930357 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000768c0017] usable
Nov 24 00:10:22.930374 kernel: reserve setup_data: [mem 0x00000000768c0018-0x00000000768c8e57] usable
Nov 24 00:10:22.930387 kernel: reserve setup_data: [mem 0x00000000768c8e58-0x00000000786cdfff] usable
Nov 24 00:10:22.930399 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Nov 24 00:10:22.930411 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 24 00:10:22.930422 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 24 00:10:22.930436 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 24 00:10:22.930450 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 24 00:10:22.930463 kernel: efi: EFI v2.7 by EDK II
Nov 24 00:10:22.930477 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77002518
Nov 24 00:10:22.930490 kernel: secureboot: Secure boot disabled
Nov 24 00:10:22.930504 kernel: SMBIOS 2.7 present.
Nov 24 00:10:22.930521 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 24 00:10:22.930534 kernel: DMI: Memory slots populated: 1/1
Nov 24 00:10:22.930567 kernel: Hypervisor detected: KVM
Nov 24 00:10:22.930581 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 24 00:10:22.930596 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 00:10:22.930609 kernel: kvm-clock: using sched offset of 5941248064 cycles
Nov 24 00:10:22.930622 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 00:10:22.930635 kernel: tsc: Detected 2499.998 MHz processor
Nov 24 00:10:22.930648 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 00:10:22.930660 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 00:10:22.930677 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 24 00:10:22.930689 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 24 00:10:22.930703 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 00:10:22.930722 kernel: Using GB pages for direct mapping
Nov 24 00:10:22.930736 kernel: ACPI: Early table checksum verification disabled
Nov 24 00:10:22.930749 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 24 00:10:22.930763 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 24 00:10:22.930779 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 24 00:10:22.930792 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 24 00:10:22.930806 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 24 00:10:22.930819 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 24 00:10:22.930831 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 24 00:10:22.930844 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 24 00:10:22.930858 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 24 00:10:22.930873 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 24 00:10:22.930890 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 24 00:10:22.930905 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 24 00:10:22.930920 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 24 00:10:22.930934 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 24 00:10:22.930948 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 24 00:10:22.930964 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 24 00:10:22.930978 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 24 00:10:22.930993 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 24 00:10:22.931010 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 24 00:10:22.931023 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 24 00:10:22.931035 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 24 00:10:22.931047 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 24 00:10:22.931060 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 24 00:10:22.931075 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 24 00:10:22.931090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 24 00:10:22.931105 kernel: NUMA: Initialized distance table, cnt=1
Nov 24 00:10:22.931120 kernel: NODE_DATA(0) allocated [mem 0x7a8eddc0-0x7a8f4fff]
Nov 24 00:10:22.931137 kernel: Zone ranges:
Nov 24 00:10:22.931151 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 00:10:22.931162 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 24 00:10:22.931174 kernel: Normal empty
Nov 24 00:10:22.931185 kernel: Device empty
Nov 24 00:10:22.931197 kernel: Movable zone start for each node
Nov 24 00:10:22.931209 kernel: Early memory node ranges
Nov 24 00:10:22.931222 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 24 00:10:22.931234 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 24 00:10:22.931258 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 24 00:10:22.931273 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 24 00:10:22.931285 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 00:10:22.931297 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 24 00:10:22.931310 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 24 00:10:22.931322 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 24 00:10:22.931337 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 24 00:10:22.931351 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 00:10:22.931363 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 24 00:10:22.931377 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 00:10:22.931392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 00:10:22.931405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 00:10:22.931419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 00:10:22.931432 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 00:10:22.931446 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 00:10:22.931458 kernel: TSC deadline timer available
Nov 24 00:10:22.931471 kernel: CPU topo: Max. logical packages: 1
Nov 24 00:10:22.931485 kernel: CPU topo: Max. logical dies: 1
Nov 24 00:10:22.931498 kernel: CPU topo: Max. dies per package: 1
Nov 24 00:10:22.931515 kernel: CPU topo: Max. threads per core: 2
Nov 24 00:10:22.931530 kernel: CPU topo: Num. cores per package: 1
Nov 24 00:10:22.931559 kernel: CPU topo: Num. threads per package: 2
Nov 24 00:10:22.931574 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 24 00:10:22.931588 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 00:10:22.931602 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 24 00:10:22.931616 kernel: Booting paravirtualized kernel on KVM
Nov 24 00:10:22.931631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 00:10:22.931645 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 24 00:10:22.931660 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 24 00:10:22.931678 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 24 00:10:22.931691 kernel: pcpu-alloc: [0] 0 1
Nov 24 00:10:22.931705 kernel: kvm-guest: PV spinlocks enabled
Nov 24 00:10:22.931719 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 24 00:10:22.931736 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:10:22.931751 kernel: random: crng init done
Nov 24 00:10:22.931764 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 00:10:22.931779 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 00:10:22.931795 kernel: Fallback order for Node 0: 0
Nov 24 00:10:22.931809 kernel: Built 1 zonelists, mobility grouping on. Total pages: 509451
Nov 24 00:10:22.931824 kernel: Policy zone: DMA32
Nov 24 00:10:22.931857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 00:10:22.931874 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 24 00:10:22.931889 kernel: Kernel/User page tables isolation: enabled
Nov 24 00:10:22.931904 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 00:10:22.931919 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 00:10:22.931934 kernel: Dynamic Preempt: voluntary
Nov 24 00:10:22.931949 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 00:10:22.931965 kernel: rcu: RCU event tracing is enabled.
Nov 24 00:10:22.931980 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 24 00:10:22.931999 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 00:10:22.932014 kernel: Rude variant of Tasks RCU enabled.
Nov 24 00:10:22.932029 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 00:10:22.932044 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 00:10:22.932058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 24 00:10:22.932076 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:10:22.932091 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:10:22.932107 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 00:10:22.932122 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 24 00:10:22.932137 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 00:10:22.932152 kernel: Console: colour dummy device 80x25
Nov 24 00:10:22.932166 kernel: printk: legacy console [tty0] enabled
Nov 24 00:10:22.932181 kernel: printk: legacy console [ttyS0] enabled
Nov 24 00:10:22.932199 kernel: ACPI: Core revision 20240827
Nov 24 00:10:22.932215 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 24 00:10:22.932229 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 00:10:22.932243 kernel: x2apic enabled
Nov 24 00:10:22.932257 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 00:10:22.932272 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 24 00:10:22.932286 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 24 00:10:22.932299 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 24 00:10:22.932313 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 24 00:10:22.932331 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 00:10:22.932345 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 00:10:22.932359 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 00:10:22.932373 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 24 00:10:22.932400 kernel: RETBleed: Vulnerable
Nov 24 00:10:22.932416 kernel: Speculative Store Bypass: Vulnerable
Nov 24 00:10:22.932429 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 24 00:10:22.932441 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 24 00:10:22.932455 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 24 00:10:22.932470 kernel: active return thunk: its_return_thunk
Nov 24 00:10:22.932483 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 24 00:10:22.932504 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 00:10:22.932517 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 00:10:22.932530 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 00:10:22.932565 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 24 00:10:22.932581 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 24 00:10:22.932597 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 24 00:10:22.932612 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 24 00:10:22.932628 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 24 00:10:22.932644 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 24 00:10:22.932660 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 00:10:22.932675 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 24 00:10:22.932695 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 24 00:10:22.932710 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 24 00:10:22.932725 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 24 00:10:22.932741 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 24 00:10:22.932756 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 24 00:10:22.932772 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 24 00:10:22.932786 kernel: Freeing SMP alternatives memory: 32K
Nov 24 00:10:22.932800 kernel: pid_max: default: 32768 minimum: 301
Nov 24 00:10:22.932816 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 00:10:22.932832 kernel: landlock: Up and running.
Nov 24 00:10:22.932847 kernel: SELinux: Initializing.
Nov 24 00:10:22.932863 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 24 00:10:22.932882 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 24 00:10:22.932896 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 24 00:10:22.932911 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 24 00:10:22.932927 kernel: signal: max sigframe size: 3632
Nov 24 00:10:22.932942 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 00:10:22.932957 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 00:10:22.932972 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 00:10:22.932987 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 24 00:10:22.933002 kernel: smp: Bringing up secondary CPUs ...
Nov 24 00:10:22.933020 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 00:10:22.933035 kernel: .... node #0, CPUs: #1
Nov 24 00:10:22.933051 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 24 00:10:22.933067 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 24 00:10:22.933082 kernel: smp: Brought up 1 node, 2 CPUs
Nov 24 00:10:22.933097 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 24 00:10:22.933112 kernel: Memory: 1899860K/2037804K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 133380K reserved, 0K cma-reserved)
Nov 24 00:10:22.933128 kernel: devtmpfs: initialized
Nov 24 00:10:22.933142 kernel: x86/mm: Memory block size: 128MB
Nov 24 00:10:22.933160 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 24 00:10:22.933175 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 00:10:22.933190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 24 00:10:22.933205 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 00:10:22.933220 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 00:10:22.933235 kernel: audit: initializing netlink subsys (disabled)
Nov 24 00:10:22.933249 kernel: audit: type=2000 audit(1763943021.435:1): state=initialized audit_enabled=0 res=1
Nov 24 00:10:22.933265 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 00:10:22.933283 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 00:10:22.933298 kernel: cpuidle: using governor menu
Nov 24 00:10:22.933312 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 00:10:22.933327 kernel: dca service started, version 1.12.1
Nov 24 00:10:22.933342 kernel: PCI: Using configuration type 1 for base access
Nov 24 00:10:22.933357 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 00:10:22.933372 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 00:10:22.933387 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 00:10:22.933402 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 00:10:22.933419 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 00:10:22.933435 kernel: ACPI: Added _OSI(Module Device)
Nov 24 00:10:22.933449 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 00:10:22.933464 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 00:10:22.933480 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 24 00:10:22.933494 kernel: ACPI: Interpreter enabled
Nov 24 00:10:22.933509 kernel: ACPI: PM: (supports S0 S5)
Nov 24 00:10:22.933524 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 00:10:22.933539 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 00:10:22.934316 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 00:10:22.934337 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 00:10:22.934352 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 00:10:22.934578 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 00:10:22.934712 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 24 00:10:22.934837 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 24 00:10:22.934854 kernel: acpiphp: Slot [3] registered
Nov 24 00:10:22.934868 kernel: acpiphp: Slot [4] registered
Nov 24 00:10:22.934886 kernel: acpiphp: Slot [5] registered
Nov 24 00:10:22.934900 kernel: acpiphp: Slot [6] registered
Nov 24 00:10:22.934914 kernel: acpiphp: Slot [7] registered
Nov 24 00:10:22.934928 kernel: acpiphp: Slot [8] registered
Nov 24 00:10:22.934941 kernel: acpiphp: Slot [9] registered
Nov 24 00:10:22.934955 kernel: acpiphp: Slot [10] registered
Nov 24 00:10:22.934969 kernel: acpiphp: Slot [11] registered
Nov 24 00:10:22.934983 kernel: acpiphp: Slot [12] registered
Nov 24 00:10:22.934997 kernel: acpiphp: Slot [13] registered
Nov 24 00:10:22.935013 kernel: acpiphp: Slot [14] registered
Nov 24 00:10:22.935027 kernel: acpiphp: Slot [15] registered
Nov 24 00:10:22.935041 kernel: acpiphp: Slot [16] registered
Nov 24 00:10:22.935055 kernel: acpiphp: Slot [17] registered
Nov 24 00:10:22.935069 kernel: acpiphp: Slot [18] registered
Nov 24 00:10:22.935083 kernel: acpiphp: Slot [19] registered
Nov 24 00:10:22.935096 kernel: acpiphp: Slot [20] registered
Nov 24 00:10:22.935110 kernel: acpiphp: Slot [21] registered
Nov 24 00:10:22.935123 kernel: acpiphp: Slot [22] registered
Nov 24 00:10:22.935137 kernel: acpiphp: Slot [23] registered
Nov 24 00:10:22.935153 kernel: acpiphp: Slot [24] registered
Nov 24 00:10:22.935167 kernel: acpiphp: Slot [25] registered
Nov 24 00:10:22.935181 kernel: acpiphp: Slot [26] registered
Nov 24 00:10:22.935195 kernel: acpiphp: Slot [27] registered
Nov 24 00:10:22.935209 kernel: acpiphp: Slot [28] registered
Nov 24 00:10:22.935223 kernel: acpiphp: Slot [29] registered
Nov 24 00:10:22.935237 kernel: acpiphp: Slot [30] registered
Nov 24 00:10:22.935251 kernel: acpiphp: Slot [31] registered
Nov 24 00:10:22.935265 kernel: PCI host bridge to bus 0000:00
Nov 24 00:10:22.935396 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 00:10:22.935517 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 00:10:22.935643 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 00:10:22.935758 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 24 00:10:22.935881 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 24 00:10:22.935994 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 00:10:22.936652 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 00:10:22.936847 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 00:10:22.936998 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 conventional PCI endpoint
Nov 24 00:10:22.937134 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 24 00:10:22.937268 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 24 00:10:22.937401 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 24 00:10:22.937537 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 24 00:10:22.939757 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 24 00:10:22.939924 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 24 00:10:22.940061 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 24 00:10:22.940208 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 24 00:10:22.940346 kernel: pci 0000:00:03.0: BAR 0 [mem 0x80000000-0x803fffff pref]
Nov 24 00:10:22.940480 kernel: pci 0000:00:03.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 24 00:10:22.942791 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 00:10:22.942973 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Endpoint
Nov 24 00:10:22.943113 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80404000-0x80407fff]
Nov 24 00:10:22.943257 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Endpoint
Nov 24 00:10:22.943390 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80400000-0x80403fff]
Nov 24 00:10:22.943410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 00:10:22.943426 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 00:10:22.943442 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 00:10:22.943462 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 00:10:22.943477 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 00:10:22.943493 kernel: iommu: Default domain type: Translated
Nov 24 00:10:22.943507 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 00:10:22.943522 kernel: efivars: Registered efivars operations
Nov 24 00:10:22.943537 kernel: PCI: Using ACPI for IRQ routing
Nov 24 00:10:22.945598 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 00:10:22.945624 kernel: e820: reserve RAM buffer [mem 0x768c0018-0x77ffffff]
Nov 24 00:10:22.945639 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 24 00:10:22.945659 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 24 00:10:22.945842 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 24 00:10:22.945977 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 24 00:10:22.948031 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 00:10:22.948069 kernel: vgaarb: loaded
Nov 24 00:10:22.948086 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 24 00:10:22.948101 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 24 00:10:22.948116 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 00:10:22.948138 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 00:10:22.948153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 00:10:22.948168 kernel: pnp: PnP ACPI init
Nov 24 00:10:22.948184 kernel: pnp: PnP ACPI: found 5 devices
Nov 24 00:10:22.948199 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 00:10:22.948214 kernel: NET: Registered PF_INET protocol family
Nov 24 00:10:22.948230 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 00:10:22.948246 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 24 00:10:22.948261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 00:10:22.948280 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 24 00:10:22.948295 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 24 00:10:22.948311 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 24 00:10:22.948326 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 24 00:10:22.948342 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 24 00:10:22.948357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 00:10:22.948372 kernel: NET: Registered PF_XDP protocol family
Nov 24 00:10:22.948530 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 00:10:22.949758 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 00:10:22.949892 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 00:10:22.950011 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 24 00:10:22.950126 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 24 00:10:22.950267 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 00:10:22.950286 kernel: PCI: CLS 0 bytes, default 64
Nov 24 00:10:22.950300 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 24 00:10:22.950315 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 24 00:10:22.950329 kernel: clocksource: Switched to clocksource tsc
Nov 24 00:10:22.950347 kernel: Initialise system trusted keyrings
Nov 24 00:10:22.950362 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 24 00:10:22.950376 kernel: Key type asymmetric registered
Nov 24 00:10:22.950390 kernel: Asymmetric key parser 'x509' registered
Nov 24 00:10:22.950403 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 00:10:22.950418 kernel: io scheduler mq-deadline registered
Nov 24 00:10:22.950433 kernel: io scheduler kyber registered
Nov 24 00:10:22.950446 kernel: io scheduler bfq registered
Nov 24 00:10:22.950460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 00:10:22.950476 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 00:10:22.950491 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 00:10:22.950505 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 00:10:22.950519 kernel: i8042: Warning: Keylock active
Nov 24 00:10:22.950534 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 00:10:22.950561 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 00:10:22.950699 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 24 00:10:22.950820 kernel: rtc_cmos 00:00: registered as rtc0
Nov 24 00:10:22.950944 kernel: rtc_cmos 00:00: setting system clock to 2025-11-24T00:10:22 UTC (1763943022)
Nov 24 00:10:22.951059 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 24 00:10:22.951097 kernel: intel_pstate: CPU model not supported
Nov 24 00:10:22.951114 kernel: efifb: probing for efifb
Nov 24 00:10:22.951131 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Nov 24 00:10:22.951148 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 24 00:10:22.951165 kernel: efifb: scrolling: redraw
Nov 24 00:10:22.951180 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 24 00:10:22.951199 kernel: Console: switching to colour frame buffer device 100x37
Nov 24 00:10:22.951216 kernel: fb0: EFI VGA frame buffer device
Nov 24 00:10:22.951233 kernel: pstore: Using crash dump compression: deflate
Nov 24 00:10:22.951254 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 24 00:10:22.951271 kernel: NET: Registered PF_INET6 protocol family
Nov 24 00:10:22.951288 kernel: Segment Routing with IPv6
Nov 24 00:10:22.951304 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 00:10:22.951322 kernel: NET: Registered PF_PACKET protocol family
Nov 24 00:10:22.951339 kernel: Key type dns_resolver registered
Nov 24 00:10:22.951355 kernel: IPI shorthand broadcast: enabled
Nov 24 00:10:22.951375 kernel: sched_clock: Marking stable (2923003363, 200801886)->(3228350093, -104544844)
Nov 24 00:10:22.951392 kernel: registered taskstats version 1
Nov 24 00:10:22.951409 kernel: Loading compiled-in X.509 certificates
Nov 24 00:10:22.951427 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607'
Nov 24 00:10:22.951443 kernel: Demotion targets for Node 0: null
Nov 24 00:10:22.951460 kernel: Key type .fscrypt registered
Nov 24 00:10:22.951477 kernel: Key type fscrypt-provisioning registered
Nov 24 00:10:22.951494 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 00:10:22.951510 kernel: ima: Allocated hash algorithm: sha1
Nov 24 00:10:22.951530 kernel: ima: No architecture policies found
Nov 24 00:10:22.953594 kernel: clk: Disabling unused clocks
Nov 24 00:10:22.953622 kernel: Warning: unable to open an initial console.
Nov 24 00:10:22.953640 kernel: Freeing unused kernel image (initmem) memory: 46200K
Nov 24 00:10:22.953658 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 00:10:22.953684 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 00:10:22.953701 kernel: Run /init as init process
Nov 24 00:10:22.953718 kernel: with arguments:
Nov 24 00:10:22.953736 kernel: /init
Nov 24 00:10:22.953752 kernel: with environment:
Nov 24 00:10:22.953769 kernel: HOME=/
Nov 24 00:10:22.953785 kernel: TERM=linux
Nov 24 00:10:22.953805 systemd[1]: Successfully made /usr/ read-only.
Nov 24 00:10:22.953827 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 00:10:22.953849 systemd[1]: Detected virtualization amazon.
Nov 24 00:10:22.953866 systemd[1]: Detected architecture x86-64.
Nov 24 00:10:22.953883 systemd[1]: Running in initrd.
Nov 24 00:10:22.953900 systemd[1]: No hostname configured, using default hostname.
Nov 24 00:10:22.953918 systemd[1]: Hostname set to .
Nov 24 00:10:22.953935 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 00:10:22.953952 systemd[1]: Queued start job for default target initrd.target.
Nov 24 00:10:22.953973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 00:10:22.953990 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 00:10:22.954009 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 24 00:10:22.954028 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 00:10:22.954045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 24 00:10:22.954065 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 24 00:10:22.954084 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 24 00:10:22.954105 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 24 00:10:22.954123 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 00:10:22.954140 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 00:10:22.954158 systemd[1]: Reached target paths.target - Path Units.
Nov 24 00:10:22.954176 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 00:10:22.954193 systemd[1]: Reached target swap.target - Swaps.
Nov 24 00:10:22.954211 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 00:10:22.954229 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 00:10:22.954249 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 00:10:22.954267 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 24 00:10:22.954285 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 24 00:10:22.954303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 00:10:22.954320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 00:10:22.954338 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 00:10:22.954355 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 00:10:22.954372 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 24 00:10:22.954390 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 00:10:22.954411 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 24 00:10:22.954429 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 24 00:10:22.954447 systemd[1]: Starting systemd-fsck-usr.service...
Nov 24 00:10:22.954465 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 00:10:22.954483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 00:10:22.954500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:10:22.954521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 24 00:10:22.954543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 00:10:22.954618 systemd[1]: Finished systemd-fsck-usr.service.
Nov 24 00:10:22.954636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 24 00:10:22.954704 systemd-journald[188]: Collecting audit messages is disabled.
Nov 24 00:10:22.954748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:10:22.954767 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 24 00:10:22.954787 systemd-journald[188]: Journal started
Nov 24 00:10:22.954827 systemd-journald[188]: Runtime Journal (/run/log/journal/ec2a737fd8339f441276fd27a0bee835) is 4.7M, max 38.1M, 33.3M free.
Nov 24 00:10:22.929865 systemd-modules-load[189]: Inserted module 'overlay'
Nov 24 00:10:22.962581 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 00:10:22.968807 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 00:10:22.975711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 00:10:22.979872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 24 00:10:22.986036 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 00:10:22.991200 kernel: Bridge firewalling registered
Nov 24 00:10:22.986797 systemd-modules-load[189]: Inserted module 'br_netfilter'
Nov 24 00:10:22.993026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 00:10:22.997701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 00:10:23.006606 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 00:10:23.010240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 00:10:23.012050 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 00:10:23.016060 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 24 00:10:23.023722 systemd-tmpfiles[207]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 24 00:10:23.028220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 00:10:23.031190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 00:10:23.036728 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 00:10:23.047403 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 00:10:23.098646 systemd-resolved[230]: Positive Trust Anchors:
Nov 24 00:10:23.099586 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 00:10:23.099648 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 00:10:23.107254 systemd-resolved[230]: Defaulting to hostname 'linux'.
Nov 24 00:10:23.110132 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 00:10:23.110813 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 00:10:23.150591 kernel: SCSI subsystem initialized
Nov 24 00:10:23.160581 kernel: Loading iSCSI transport class v2.0-870.
Nov 24 00:10:23.173584 kernel: iscsi: registered transport (tcp)
Nov 24 00:10:23.196595 kernel: iscsi: registered transport (qla4xxx)
Nov 24 00:10:23.196729 kernel: QLogic iSCSI HBA Driver
Nov 24 00:10:23.219254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 00:10:23.241377 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 00:10:23.242516 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 00:10:23.291810 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 24 00:10:23.294368 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 24 00:10:23.352581 kernel: raid6: avx512x4 gen() 18093 MB/s
Nov 24 00:10:23.370605 kernel: raid6: avx512x2 gen() 18240 MB/s
Nov 24 00:10:23.388582 kernel: raid6: avx512x1 gen() 18228 MB/s
Nov 24 00:10:23.406601 kernel: raid6: avx2x4 gen() 17964 MB/s
Nov 24 00:10:23.424581 kernel: raid6: avx2x2 gen() 18052 MB/s
Nov 24 00:10:23.443009 kernel: raid6: avx2x1 gen() 14106 MB/s
Nov 24 00:10:23.443070 kernel: raid6: using algorithm avx512x2 gen() 18240 MB/s
Nov 24 00:10:23.462017 kernel: raid6: .... xor() 24051 MB/s, rmw enabled
Nov 24 00:10:23.462089 kernel: raid6: using avx512x2 recovery algorithm
Nov 24 00:10:23.483595 kernel: xor: automatically using best checksumming function avx
Nov 24 00:10:23.654584 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 24 00:10:23.661106 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 00:10:23.663375 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 00:10:23.695218 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Nov 24 00:10:23.701995 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 00:10:23.705810 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 24 00:10:23.739678 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Nov 24 00:10:23.768181 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 00:10:23.770274 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 00:10:23.843969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 00:10:23.847691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 24 00:10:23.930740 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 24 00:10:23.931019 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 24 00:10:23.936577 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Nov 24 00:10:23.951575 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:40:1c:45:4c:41
Nov 24 00:10:23.951824 kernel: cryptd: max_cpu_qlen set to 1000
Nov 24 00:10:23.963576 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 24 00:10:23.966569 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 00:10:23.977051 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input2
Nov 24 00:10:23.985579 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 24 00:10:23.985834 kernel: AES CTR mode by8 optimization enabled
Nov 24 00:10:24.002351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 24 00:10:24.002417 kernel: GPT:9289727 != 33554431
Nov 24 00:10:24.002437 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 24 00:10:24.002456 kernel: GPT:9289727 != 33554431
Nov 24 00:10:24.002473 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 24 00:10:24.002499 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 24 00:10:24.008491 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 00:10:24.008598 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:10:24.011147 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:10:24.017180 (udev-worker)[489]: Network interface NamePolicy= disabled on kernel command line.
Nov 24 00:10:24.017363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 00:10:24.018372 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 24 00:10:24.062066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 00:10:24.074626 kernel: nvme nvme0: using unchecked data buffer
Nov 24 00:10:24.207975 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 24 00:10:24.229168 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 24 00:10:24.240582 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 24 00:10:24.252621 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 24 00:10:24.253235 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 24 00:10:24.265692 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 24 00:10:24.266438 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 00:10:24.267678 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 00:10:24.268970 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 00:10:24.270766 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 24 00:10:24.273832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 24 00:10:24.297093 disk-uuid[669]: Primary Header is updated.
Nov 24 00:10:24.297093 disk-uuid[669]: Secondary Entries is updated.
Nov 24 00:10:24.297093 disk-uuid[669]: Secondary Header is updated.
Nov 24 00:10:24.302907 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 00:10:24.306603 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 24 00:10:25.317626 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 24 00:10:25.318327 disk-uuid[672]: The operation has completed successfully.
Nov 24 00:10:25.467064 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 24 00:10:25.467191 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 24 00:10:25.505735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 24 00:10:25.519147 sh[937]: Success
Nov 24 00:10:25.547094 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 24 00:10:25.547186 kernel: device-mapper: uevent: version 1.0.3
Nov 24 00:10:25.547209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 24 00:10:25.560576 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Nov 24 00:10:25.658161 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 24 00:10:25.661642 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 24 00:10:25.679473 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 24 00:10:25.698580 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (960)
Nov 24 00:10:25.701651 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7
Nov 24 00:10:25.701715 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:10:25.786710 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 24 00:10:25.786782 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 24 00:10:25.786800 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 24 00:10:25.813608 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 24 00:10:25.814847 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 00:10:25.815504 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 24 00:10:25.816801 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 24 00:10:25.819498 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 24 00:10:25.864587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (993)
Nov 24 00:10:25.868590 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:10:25.868656 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:10:25.887639 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 24 00:10:25.887722 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 24 00:10:25.895627 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:10:25.897076 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 24 00:10:25.900298 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 24 00:10:25.943490 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 00:10:25.946281 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 00:10:25.992373 systemd-networkd[1129]: lo: Link UP
Nov 24 00:10:25.992388 systemd-networkd[1129]: lo: Gained carrier
Nov 24 00:10:25.994657 systemd-networkd[1129]: Enumeration completed
Nov 24 00:10:25.994785 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 00:10:25.995713 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:10:25.995719 systemd-networkd[1129]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 00:10:25.997016 systemd[1]: Reached target network.target - Network.
Nov 24 00:10:26.000304 systemd-networkd[1129]: eth0: Link UP
Nov 24 00:10:26.000310 systemd-networkd[1129]: eth0: Gained carrier
Nov 24 00:10:26.000330 systemd-networkd[1129]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 00:10:26.041690 systemd-networkd[1129]: eth0: DHCPv4 address 172.31.17.28/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 24 00:10:26.473739 ignition[1078]: Ignition 2.22.0
Nov 24 00:10:26.473757 ignition[1078]: Stage: fetch-offline
Nov 24 00:10:26.473988 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:10:26.474000 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 24 00:10:26.475045 ignition[1078]: Ignition finished successfully
Nov 24 00:10:26.477785 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 00:10:26.479359 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 24 00:10:26.513682 ignition[1138]: Ignition 2.22.0
Nov 24 00:10:26.513697 ignition[1138]: Stage: fetch
Nov 24 00:10:26.514090 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:10:26.514102 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 24 00:10:26.514232 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 24 00:10:26.582341 ignition[1138]: PUT result: OK
Nov 24 00:10:26.585826 ignition[1138]: parsed url from cmdline: ""
Nov 24 00:10:26.585839 ignition[1138]: no config URL provided
Nov 24 00:10:26.585849 ignition[1138]: reading system config file "/usr/lib/ignition/user.ign"
Nov 24 00:10:26.585863 ignition[1138]: no config at "/usr/lib/ignition/user.ign"
Nov 24 00:10:26.585888 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 24 00:10:26.586645 ignition[1138]: PUT result: OK
Nov 24 00:10:26.586713 ignition[1138]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 24 00:10:26.587716 ignition[1138]: GET result: OK
Nov 24 00:10:26.587807 ignition[1138]: parsing config with SHA512: d365e41e5232101c2529cb5d675722f2ab463a280f18c8395904739aa6050dab6f740897971f742758e3c6428e3227d4105bbe0875eaf539feeea1715c9f05b8
Nov 24 00:10:26.597310 unknown[1138]: fetched base config from "system"
Nov 24 00:10:26.597327 unknown[1138]: fetched base config from "system"
Nov 24 00:10:26.597907 ignition[1138]: fetch: fetch complete
Nov 24 00:10:26.597335 unknown[1138]: fetched user config from "aws"
Nov 24 00:10:26.597915 ignition[1138]: fetch: fetch passed
Nov 24 00:10:26.597979 ignition[1138]: Ignition finished successfully
Nov 24 00:10:26.601039 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 24 00:10:26.603136 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 24 00:10:26.655816 ignition[1144]: Ignition 2.22.0
Nov 24 00:10:26.655832 ignition[1144]: Stage: kargs
Nov 24 00:10:26.656452 ignition[1144]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:10:26.656465 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 24 00:10:26.656606 ignition[1144]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 24 00:10:26.658593 ignition[1144]: PUT result: OK
Nov 24 00:10:26.662289 ignition[1144]: kargs: kargs passed
Nov 24 00:10:26.662375 ignition[1144]: Ignition finished successfully
Nov 24 00:10:26.664859 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 24 00:10:26.666891 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 24 00:10:26.702038 ignition[1150]: Ignition 2.22.0
Nov 24 00:10:26.702053 ignition[1150]: Stage: disks
Nov 24 00:10:26.702443 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
Nov 24 00:10:26.702457 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 24 00:10:26.702608 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 24 00:10:26.703467 ignition[1150]: PUT result: OK
Nov 24 00:10:26.705891 ignition[1150]: disks: disks passed
Nov 24 00:10:26.705971 ignition[1150]: Ignition finished successfully
Nov 24 00:10:26.708093 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 24 00:10:26.708994 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 24 00:10:26.709336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 24 00:10:26.709894 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 00:10:26.710448 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 00:10:26.711004 systemd[1]: Reached target basic.target - Basic System.
Nov 24 00:10:26.712764 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 24 00:10:26.753066 systemd-fsck[1158]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 24 00:10:26.756909 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 24 00:10:26.758692 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 24 00:10:26.946582 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none.
Nov 24 00:10:26.947422 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 24 00:10:26.948787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 24 00:10:26.950879 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 24 00:10:26.953430 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 24 00:10:26.956165 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 24 00:10:26.956883 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 24 00:10:26.956923 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 00:10:26.966912 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 24 00:10:26.969365 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 24 00:10:26.982593 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1177)
Nov 24 00:10:26.986729 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:10:26.986780 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Nov 24 00:10:26.997338 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 24 00:10:26.997434 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 24 00:10:26.998985 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 24 00:10:27.399171 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory
Nov 24 00:10:27.440304 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory
Nov 24 00:10:27.445962 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory
Nov 24 00:10:27.452805 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 24 00:10:27.729604 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 24 00:10:27.732673 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 24 00:10:27.735293 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 24 00:10:27.751133 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 24 00:10:27.754418 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7
Nov 24 00:10:27.781946 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 24 00:10:27.793796 ignition[1290]: INFO : Ignition 2.22.0 Nov 24 00:10:27.793796 ignition[1290]: INFO : Stage: mount Nov 24 00:10:27.795325 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:10:27.795325 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:10:27.795325 ignition[1290]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:10:27.795325 ignition[1290]: INFO : PUT result: OK Nov 24 00:10:27.798099 ignition[1290]: INFO : mount: mount passed Nov 24 00:10:27.799524 ignition[1290]: INFO : Ignition finished successfully Nov 24 00:10:27.800383 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 00:10:27.802126 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 00:10:27.922745 systemd-networkd[1129]: eth0: Gained IPv6LL Nov 24 00:10:27.949654 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 00:10:27.982580 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1302) Nov 24 00:10:27.987667 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 00:10:27.987754 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 24 00:10:27.995292 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 24 00:10:27.995379 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 24 00:10:27.997830 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 00:10:28.037822 ignition[1318]: INFO : Ignition 2.22.0 Nov 24 00:10:28.037822 ignition[1318]: INFO : Stage: files Nov 24 00:10:28.039494 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:10:28.039494 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:10:28.039494 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:10:28.041012 ignition[1318]: INFO : PUT result: OK Nov 24 00:10:28.042713 ignition[1318]: DEBUG : files: compiled without relabeling support, skipping Nov 24 00:10:28.044050 ignition[1318]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 00:10:28.045618 ignition[1318]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 00:10:28.068627 ignition[1318]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 00:10:28.069778 ignition[1318]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 00:10:28.069778 ignition[1318]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 00:10:28.069326 unknown[1318]: wrote ssh authorized keys file for user: core Nov 24 00:10:28.084926 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 24 00:10:28.086198 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 24 00:10:28.152621 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 00:10:28.444815 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 24 00:10:28.444815 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:10:28.446907 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 00:10:28.452797 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:10:28.452797 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 00:10:28.452797 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:10:28.455748 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:10:28.455748 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:10:28.455748 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 24 00:10:28.882338 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 00:10:29.579064 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 24 00:10:29.579064 ignition[1318]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 00:10:29.593584 ignition[1318]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:10:29.598122 ignition[1318]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 00:10:29.598122 ignition[1318]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 00:10:29.598122 ignition[1318]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 24 00:10:29.602609 ignition[1318]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 00:10:29.602609 ignition[1318]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:10:29.602609 ignition[1318]: INFO : files: 
createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 00:10:29.602609 ignition[1318]: INFO : files: files passed Nov 24 00:10:29.602609 ignition[1318]: INFO : Ignition finished successfully Nov 24 00:10:29.600249 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 00:10:29.602426 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 00:10:29.604433 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 00:10:29.620760 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 00:10:29.620907 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 00:10:29.636236 initrd-setup-root-after-ignition[1349]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:10:29.636236 initrd-setup-root-after-ignition[1349]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:10:29.640098 initrd-setup-root-after-ignition[1353]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 00:10:29.641767 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:10:29.642825 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 24 00:10:29.644864 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 24 00:10:29.695187 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 24 00:10:29.695329 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 24 00:10:29.696749 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 24 00:10:29.698979 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 24 00:10:29.699998 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 24 00:10:29.701175 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 24 00:10:29.725346 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:10:29.727624 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 24 00:10:29.755242 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:10:29.756222 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:10:29.757248 systemd[1]: Stopped target timers.target - Timer Units. Nov 24 00:10:29.758124 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 24 00:10:29.758450 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 24 00:10:29.759656 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 24 00:10:29.760698 systemd[1]: Stopped target basic.target - Basic System. Nov 24 00:10:29.761506 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 24 00:10:29.762327 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 00:10:29.763091 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 24 00:10:29.764070 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 24 00:10:29.764880 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 24 00:10:29.765678 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
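The files stage above is driven entirely by the user-supplied Ignition config: it writes the helm tarball, several manifests under /home/core, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions link, and enables prepare-helm.service. A Butane snippet that would translate into roughly these operations could look like the sketch below; only the paths, URLs, and unit name are taken from the log, while the inline contents and the unit body are assumptions, and the smaller files (install.sh, the nfs and nginx manifests) are elided:

    # Illustrative Butane config; compile with `butane` to Ignition JSON.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
          contents:
            source: https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              GROUP=stable   # placeholder content, not taken from the log
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            # Unit body is an assumption; only the unit name appears in the log.
            [Unit]
            Description=Unpack helm to /opt/bin
            ConditionPathExists=!/opt/bin/helm
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/mkdir -p /opt/bin
            ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target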
Nov 24 00:10:29.766483 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 24 00:10:29.767607 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 24 00:10:29.768520 systemd[1]: Stopped target swap.target - Swaps. Nov 24 00:10:29.769270 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 24 00:10:29.769500 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 24 00:10:29.770530 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:10:29.771385 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:10:29.772195 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 24 00:10:29.772492 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:10:29.773024 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 24 00:10:29.773242 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 24 00:10:29.774579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 24 00:10:29.774772 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 24 00:10:29.775532 systemd[1]: ignition-files.service: Deactivated successfully. Nov 24 00:10:29.775755 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 24 00:10:29.778682 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 24 00:10:29.781892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 24 00:10:29.782400 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 24 00:10:29.782651 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:10:29.786842 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 24 00:10:29.787014 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 00:10:29.795309 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 24 00:10:29.795438 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 24 00:10:29.820287 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 24 00:10:29.822665 ignition[1373]: INFO : Ignition 2.22.0 Nov 24 00:10:29.822665 ignition[1373]: INFO : Stage: umount Nov 24 00:10:29.822665 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 00:10:29.822665 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 24 00:10:29.822665 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 24 00:10:29.825475 ignition[1373]: INFO : PUT result: OK Nov 24 00:10:29.827308 ignition[1373]: INFO : umount: umount passed Nov 24 00:10:29.827308 ignition[1373]: INFO : Ignition finished successfully Nov 24 00:10:29.830240 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 24 00:10:29.830405 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 24 00:10:29.831332 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 24 00:10:29.831396 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 24 00:10:29.832495 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 24 00:10:29.832577 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 24 00:10:29.833189 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 24 00:10:29.833249 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Nov 24 00:10:29.833915 systemd[1]: Stopped target network.target - Network. Nov 24 00:10:29.834496 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 24 00:10:29.834573 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 00:10:29.835182 systemd[1]: Stopped target paths.target - Path Units. Nov 24 00:10:29.835789 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 24 00:10:29.840642 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:10:29.841180 systemd[1]: Stopped target slices.target - Slice Units. Nov 24 00:10:29.842158 systemd[1]: Stopped target sockets.target - Socket Units. Nov 24 00:10:29.842862 systemd[1]: iscsid.socket: Deactivated successfully. Nov 24 00:10:29.842908 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 24 00:10:29.843675 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 24 00:10:29.843720 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 24 00:10:29.845090 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 24 00:10:29.845176 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 24 00:10:29.846916 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 24 00:10:29.846988 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 24 00:10:29.847772 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 24 00:10:29.849082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 24 00:10:29.857944 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 24 00:10:29.858105 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 24 00:10:29.863471 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 24 00:10:29.863908 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 24 00:10:29.864058 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 24 00:10:29.866489 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 24 00:10:29.867966 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 24 00:10:29.868630 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 24 00:10:29.868686 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:10:29.870401 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 24 00:10:29.870958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 24 00:10:29.871035 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 00:10:29.871668 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 24 00:10:29.871731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:10:29.872667 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 24 00:10:29.872725 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 24 00:10:29.873279 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 24 00:10:29.873337 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:10:29.874167 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 24 00:10:29.881652 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 24 00:10:29.881773 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:10:29.889299 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 24 00:10:29.890778 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:10:29.892097 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 24 00:10:29.892156 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 24 00:10:29.893796 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 24 00:10:29.893846 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:10:29.894511 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 24 00:10:29.894696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 24 00:10:29.895780 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 24 00:10:29.895939 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 24 00:10:29.897079 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 24 00:10:29.897143 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 00:10:29.899223 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 24 00:10:29.901630 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 24 00:10:29.901709 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:10:29.903079 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 24 00:10:29.903143 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 00:10:29.905235 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 24 00:10:29.905297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:10:29.906153 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 24 00:10:29.906214 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:10:29.907368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 00:10:29.907424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:10:29.910820 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 24 00:10:29.910895 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 24 00:10:29.910961 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 24 00:10:29.911017 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 00:10:29.923285 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 24 00:10:29.923432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 24 00:10:29.929206 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 24 00:10:29.929323 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 24 00:10:30.044972 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Nov 24 00:10:30.045124 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 24 00:10:30.047510 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 24 00:10:30.048465 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 24 00:10:30.048600 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 24 00:10:30.050526 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 24 00:10:30.070910 systemd[1]: Switching root. Nov 24 00:10:30.123625 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Nov 24 00:10:30.123725 systemd-journald[188]: Journal stopped Nov 24 00:10:32.103495 kernel: SELinux: policy capability network_peer_controls=1 Nov 24 00:10:32.103629 kernel: SELinux: policy capability open_perms=1 Nov 24 00:10:32.103654 kernel: SELinux: policy capability extended_socket_class=1 Nov 24 00:10:32.103676 kernel: SELinux: policy capability always_check_network=0 Nov 24 00:10:32.103697 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 24 00:10:32.103729 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 24 00:10:32.103750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 24 00:10:32.103770 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 24 00:10:32.103789 kernel: SELinux: policy capability userspace_initial_context=0 Nov 24 00:10:32.103810 kernel: audit: type=1403 audit(1763943030.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 24 00:10:32.103848 systemd[1]: Successfully loaded SELinux policy in 84.464ms. Nov 24 00:10:32.103892 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.273ms. Nov 24 00:10:32.103916 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 24 00:10:32.103940 systemd[1]: Detected virtualization amazon. Nov 24 00:10:32.103965 systemd[1]: Detected architecture x86-64. Nov 24 00:10:32.103985 systemd[1]: Detected first boot. Nov 24 00:10:32.104008 systemd[1]: Initializing machine ID from VM UUID. Nov 24 00:10:32.104030 zram_generator::config[1417]: No configuration found. Nov 24 00:10:32.104053 kernel: Guest personality initialized and is inactive Nov 24 00:10:32.104073 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Nov 24 00:10:32.104094 kernel: Initialized host personality Nov 24 00:10:32.104113 kernel: NET: Registered PF_VSOCK protocol family Nov 24 00:10:32.104138 systemd[1]: Populated /etc with preset unit settings. Nov 24 00:10:32.104158 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 24 00:10:32.104177 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 24 00:10:32.104198 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 24 00:10:32.104219 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 24 00:10:32.104247 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 24 00:10:32.104268 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 24 00:10:32.104289 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 24 00:10:32.104311 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Nov 24 00:10:32.104335 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 24 00:10:32.104356 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 24 00:10:32.104378 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 24 00:10:32.104396 systemd[1]: Created slice user.slice - User and Session Slice. Nov 24 00:10:32.104414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 24 00:10:32.104432 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 24 00:10:32.104451 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 24 00:10:32.104470 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 24 00:10:32.104489 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 24 00:10:32.104512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 24 00:10:32.104530 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 24 00:10:32.104572 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 24 00:10:32.104591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 24 00:10:32.104609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 24 00:10:32.104629 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 24 00:10:32.104649 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 24 00:10:32.104672 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 24 00:10:32.104693 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 00:10:32.104716 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 00:10:32.104735 systemd[1]: Reached target slices.target - Slice Units. Nov 24 00:10:32.104752 systemd[1]: Reached target swap.target - Swaps. Nov 24 00:10:32.104773 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 24 00:10:32.104793 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 24 00:10:32.104815 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 24 00:10:32.104836 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 00:10:32.104860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 00:10:32.104886 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 00:10:32.104909 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 24 00:10:32.104928 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 24 00:10:32.104950 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 24 00:10:32.104973 systemd[1]: Mounting media.mount - External Media Directory... Nov 24 00:10:32.104993 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:10:32.105013 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 24 00:10:32.105034 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Nov 24 00:10:32.105057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 24 00:10:32.105079 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 24 00:10:32.105100 systemd[1]: Reached target machines.target - Containers. Nov 24 00:10:32.105120 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 24 00:10:32.105141 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:10:32.105161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 00:10:32.105182 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 24 00:10:32.105203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:10:32.105223 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:10:32.105246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:10:32.105267 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 24 00:10:32.105287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 24 00:10:32.105308 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 24 00:10:32.105329 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 24 00:10:32.105349 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 24 00:10:32.105369 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 24 00:10:32.105389 systemd[1]: Stopped systemd-fsck-usr.service. Nov 24 00:10:32.105415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:10:32.105435 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 00:10:32.105454 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 00:10:32.105475 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 00:10:32.105494 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 24 00:10:32.105513 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 24 00:10:32.105536 kernel: fuse: init (API version 7.41) Nov 24 00:10:32.105631 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 00:10:32.105651 systemd[1]: verity-setup.service: Deactivated successfully. Nov 24 00:10:32.105670 systemd[1]: Stopped verity-setup.service. Nov 24 00:10:32.105690 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:10:32.105714 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 24 00:10:32.105733 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 24 00:10:32.105752 systemd[1]: Mounted media.mount - External Media Directory. Nov 24 00:10:32.105771 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Nov 24 00:10:32.105790 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 24 00:10:32.105808 kernel: loop: module loaded Nov 24 00:10:32.105826 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 24 00:10:32.105845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 00:10:32.105867 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 24 00:10:32.105886 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 24 00:10:32.105905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:10:32.105924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:10:32.105943 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:10:32.105962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:10:32.105981 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 24 00:10:32.106000 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 24 00:10:32.106019 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 00:10:32.106040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 00:10:32.106059 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 24 00:10:32.106077 kernel: ACPI: bus type drm_connector registered Nov 24 00:10:32.106095 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 24 00:10:32.106158 systemd-journald[1500]: Collecting audit messages is disabled. Nov 24 00:10:32.106197 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:10:32.106219 systemd-journald[1500]: Journal started Nov 24 00:10:32.106255 systemd-journald[1500]: Runtime Journal (/run/log/journal/ec2a737fd8339f441276fd27a0bee835) is 4.7M, max 38.1M, 33.3M free. Nov 24 00:10:31.646427 systemd[1]: Queued start job for default target multi-user.target. Nov 24 00:10:31.656857 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 24 00:10:31.657543 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 24 00:10:32.109596 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:10:32.112617 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 00:10:32.114610 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:10:32.115068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:10:32.116522 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 24 00:10:32.135602 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 00:10:32.139679 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 24 00:10:32.143670 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 24 00:10:32.145838 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 24 00:10:32.145899 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 00:10:32.150323 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 24 00:10:32.159783 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Nov 24 00:10:32.162789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:10:32.166923 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 24 00:10:32.170789 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 24 00:10:32.171590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:10:32.179115 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 24 00:10:32.181612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:10:32.182920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 00:10:32.192792 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 24 00:10:32.196849 systemd-journald[1500]: Time spent on flushing to /var/log/journal/ec2a737fd8339f441276fd27a0bee835 is 107.649ms for 1015 entries. Nov 24 00:10:32.196849 systemd-journald[1500]: System Journal (/var/log/journal/ec2a737fd8339f441276fd27a0bee835) is 8M, max 195.6M, 187.6M free. Nov 24 00:10:32.325684 systemd-journald[1500]: Received client request to flush runtime journal. Nov 24 00:10:32.325740 kernel: loop0: detected capacity change from 0 to 224512 Nov 24 00:10:32.201885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 00:10:32.205969 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 24 00:10:32.208846 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 24 00:10:32.246517 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 24 00:10:32.247403 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 24 00:10:32.253259 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 24 00:10:32.276321 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Nov 24 00:10:32.276346 systemd-tmpfiles[1552]: ACLs are not supported, ignoring. Nov 24 00:10:32.296384 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 00:10:32.302813 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 24 00:10:32.306070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 00:10:32.318463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 00:10:32.331333 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 24 00:10:32.349106 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 24 00:10:32.412922 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 24 00:10:32.427232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 00:10:32.468883 systemd-tmpfiles[1570]: ACLs are not supported, ignoring. Nov 24 00:10:32.469349 systemd-tmpfiles[1570]: ACLs are not supported, ignoring. Nov 24 00:10:32.478953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
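At this point journald moves entries from the runtime store under /run/log/journal to persistent storage under /var/log/journal, which is what the flush timing and journal-size messages below report. The same flush can be requested and inspected manually:

    # Show how much space the runtime and persistent journals use.
    journalctl --disk-usage

    # Ask journald to flush /run/log/journal into /var/log/journal,
    # the action systemd-journal-flush.service triggers at boot.
    journalctl --flush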
Nov 24 00:10:32.484614 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 24 00:10:32.521918 kernel: loop1: detected capacity change from 0 to 128560 Nov 24 00:10:32.659951 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 24 00:10:32.668575 kernel: loop2: detected capacity change from 0 to 110984 Nov 24 00:10:32.821582 kernel: loop3: detected capacity change from 0 to 72368 Nov 24 00:10:32.991540 kernel: loop4: detected capacity change from 0 to 224512 Nov 24 00:10:33.017581 kernel: loop5: detected capacity change from 0 to 128560 Nov 24 00:10:33.071024 kernel: loop6: detected capacity change from 0 to 110984 Nov 24 00:10:33.106583 kernel: loop7: detected capacity change from 0 to 72368 Nov 24 00:10:33.113940 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 24 00:10:33.118730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 00:10:33.133214 (sd-merge)[1578]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 24 00:10:33.134285 (sd-merge)[1578]: Merged extensions into '/usr'. Nov 24 00:10:33.142688 systemd[1]: Reload requested from client PID 1551 ('systemd-sysext') (unit systemd-sysext.service)... Nov 24 00:10:33.142876 systemd[1]: Reloading... Nov 24 00:10:33.165467 systemd-udevd[1580]: Using default interface naming scheme 'v255'. Nov 24 00:10:33.252591 zram_generator::config[1605]: No configuration found. Nov 24 00:10:33.605993 (udev-worker)[1612]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:10:33.654572 kernel: mousedev: PS/2 mouse device common for all mice Nov 24 00:10:33.659576 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 24 00:10:33.690567 kernel: ACPI: button: Power Button [PWRF] Nov 24 00:10:33.690663 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Nov 24 00:10:33.692838 kernel: ACPI: button: Sleep Button [SLPF] Nov 24 00:10:33.866197 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 24 00:10:33.866113 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 24 00:10:33.867518 systemd[1]: Reloading finished in 723 ms. Nov 24 00:10:33.887582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 00:10:33.897679 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 24 00:10:33.899704 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 24 00:10:33.903649 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 24 00:10:33.927810 systemd[1]: Starting ensure-sysext.service... Nov 24 00:10:33.932813 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 00:10:33.935902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 00:10:33.979514 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 24 00:10:33.986201 systemd[1]: Reload requested from client PID 1745 ('systemctl') (unit ensure-sysext.service)... Nov 24 00:10:33.986226 systemd[1]: Reloading... Nov 24 00:10:33.993468 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
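The sd-merge lines that follow show systemd-sysext picking up the kubernetes image Ignition linked into /etc/extensions, along with the built-in Flatcar extensions, and overlaying them onto /usr and /opt. The merge can be inspected or redone from a shell:

    # Show which extension images are known and whether they are merged.
    systemd-sysext status

    # Re-scan /etc/extensions and /var/lib/extensions and rebuild the overlay.
    systemd-sysext refresh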
Nov 24 00:10:33.993925 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 24 00:10:33.994416 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 24 00:10:33.996019 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 24 00:10:33.997382 systemd-tmpfiles[1747]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 24 00:10:33.997842 systemd-tmpfiles[1747]: ACLs are not supported, ignoring. Nov 24 00:10:33.997923 systemd-tmpfiles[1747]: ACLs are not supported, ignoring. Nov 24 00:10:34.004241 systemd-tmpfiles[1747]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:10:34.004402 systemd-tmpfiles[1747]: Skipping /boot Nov 24 00:10:34.018212 systemd-tmpfiles[1747]: Detected autofs mount point /boot during canonicalization of boot. Nov 24 00:10:34.018378 systemd-tmpfiles[1747]: Skipping /boot Nov 24 00:10:34.180583 zram_generator::config[1814]: No configuration found. Nov 24 00:10:34.510027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 24 00:10:34.511223 systemd[1]: Reloading finished in 523 ms. Nov 24 00:10:34.522991 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 24 00:10:34.543823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 00:10:34.615034 systemd[1]: Finished ensure-sysext.service. Nov 24 00:10:34.628793 systemd-networkd[1746]: lo: Link UP Nov 24 00:10:34.629172 systemd-networkd[1746]: lo: Gained carrier Nov 24 00:10:34.630176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:10:34.631281 systemd-networkd[1746]: Enumeration completed Nov 24 00:10:34.631943 systemd-networkd[1746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:10:34.632060 systemd-networkd[1746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 00:10:34.633745 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:10:34.634925 systemd-networkd[1746]: eth0: Link UP Nov 24 00:10:34.635222 systemd-networkd[1746]: eth0: Gained carrier Nov 24 00:10:34.635354 systemd-networkd[1746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 00:10:34.638347 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 24 00:10:34.639396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 24 00:10:34.645649 systemd-networkd[1746]: eth0: DHCPv4 address 172.31.17.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 24 00:10:34.646114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 24 00:10:34.651851 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 24 00:10:34.654967 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 24 00:10:34.658704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
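systemd-networkd matches eth0 against the stock /usr/lib/systemd/network/zz-default.network shipped with the OS and brings the interface up with DHCPv4, acquiring 172.31.17.28/20 as logged above. A minimal .network file of the same shape is shown below; the actual contents of zz-default.network are not reproduced from the log and this override is an assumption:

    # /etc/systemd/network/10-eth0.network (illustrative)
    [Match]
    Name=eth0

    [Network]
    DHCP=yes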
Nov 24 00:10:34.659984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 24 00:10:34.665841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 24 00:10:34.672421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 24 00:10:34.681238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 24 00:10:34.685928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 00:10:34.686762 systemd[1]: Reached target time-set.target - System Time Set. Nov 24 00:10:34.692574 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 24 00:10:34.700504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 00:10:34.701739 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 24 00:10:34.702989 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 00:10:34.705412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 24 00:10:34.707688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 24 00:10:34.715463 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 24 00:10:34.716872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 24 00:10:34.717930 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 24 00:10:34.718165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 24 00:10:34.725094 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 24 00:10:34.726751 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 24 00:10:34.736088 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 24 00:10:34.745078 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 24 00:10:34.745719 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 24 00:10:34.745808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 24 00:10:34.751378 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 24 00:10:34.788763 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 24 00:10:34.805351 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 24 00:10:34.806488 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 24 00:10:34.812696 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 24 00:10:34.813330 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 24 00:10:34.821882 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 24 00:10:34.838033 augenrules[1925]: No rules Nov 24 00:10:34.840026 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:10:34.840346 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:10:34.842112 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 24 00:10:34.880336 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 00:10:34.895635 systemd-resolved[1891]: Positive Trust Anchors: Nov 24 00:10:34.895653 systemd-resolved[1891]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 00:10:34.895702 systemd-resolved[1891]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 00:10:34.901085 systemd-resolved[1891]: Defaulting to hostname 'linux'. Nov 24 00:10:34.903188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 00:10:34.903831 systemd[1]: Reached target network.target - Network. Nov 24 00:10:34.904659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 00:10:34.905081 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 00:10:34.905602 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 24 00:10:34.906009 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 24 00:10:34.906405 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 24 00:10:34.906946 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 24 00:10:34.907432 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 24 00:10:34.907828 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 24 00:10:34.908382 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 24 00:10:34.908429 systemd[1]: Reached target paths.target - Path Units. Nov 24 00:10:34.908958 systemd[1]: Reached target timers.target - Timer Units. Nov 24 00:10:34.911015 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 24 00:10:34.913293 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 24 00:10:34.915815 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 24 00:10:34.916887 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 24 00:10:34.917283 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 24 00:10:34.934739 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 24 00:10:34.937447 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Nov 24 00:10:34.938879 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 24 00:10:34.940399 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 00:10:34.941238 systemd[1]: Reached target basic.target - Basic System. Nov 24 00:10:34.941828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:10:34.941867 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 24 00:10:34.943075 systemd[1]: Starting containerd.service - containerd container runtime... Nov 24 00:10:34.948758 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 24 00:10:34.951727 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 24 00:10:34.955735 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 24 00:10:34.959805 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 24 00:10:34.962939 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 24 00:10:34.963813 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 24 00:10:34.967866 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 24 00:10:34.972844 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 24 00:10:34.977997 systemd[1]: Started ntpd.service - Network Time Service. Nov 24 00:10:35.006728 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 24 00:10:35.008402 jq[1941]: false Nov 24 00:10:35.011768 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 24 00:10:35.021798 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 24 00:10:35.033727 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 24 00:10:35.044837 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 24 00:10:35.047680 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 24 00:10:35.048454 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 24 00:10:35.056315 systemd[1]: Starting update-engine.service - Update Engine... Nov 24 00:10:35.065746 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 24 00:10:35.072101 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 24 00:10:35.074183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 24 00:10:35.074461 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 24 00:10:35.079233 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 24 00:10:35.079512 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 24 00:10:35.088187 oslogin_cache_refresh[1943]: Refreshing passwd entry cache Nov 24 00:10:35.089050 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Refreshing passwd entry cache Nov 24 00:10:35.106584 jq[1957]: true Nov 24 00:10:35.122589 extend-filesystems[1942]: Found /dev/nvme0n1p6 Nov 24 00:10:35.131601 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Failure getting users, quitting Nov 24 00:10:35.131601 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:10:35.131601 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Refreshing group entry cache Nov 24 00:10:35.128488 oslogin_cache_refresh[1943]: Failure getting users, quitting Nov 24 00:10:35.128511 oslogin_cache_refresh[1943]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 24 00:10:35.128584 oslogin_cache_refresh[1943]: Refreshing group entry cache Nov 24 00:10:35.144966 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Failure getting groups, quitting Nov 24 00:10:35.144966 google_oslogin_nss_cache[1943]: oslogin_cache_refresh[1943]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:10:35.145130 update_engine[1956]: I20251124 00:10:35.139672 1956 main.cc:92] Flatcar Update Engine starting Nov 24 00:10:35.135698 oslogin_cache_refresh[1943]: Failure getting groups, quitting Nov 24 00:10:35.135716 oslogin_cache_refresh[1943]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 24 00:10:35.152997 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 24 00:10:35.154805 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 24 00:10:35.163177 extend-filesystems[1942]: Found /dev/nvme0n1p9 Nov 24 00:10:35.182356 extend-filesystems[1942]: Checking size of /dev/nvme0n1p9 Nov 24 00:10:35.182646 ntpd[1945]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: ---------------------------------------------------- Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: corporation. Support and training for ntp-4 are Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: available at https://www.nwtime.org/support Nov 24 00:10:35.183332 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: ---------------------------------------------------- Nov 24 00:10:35.182715 ntpd[1945]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:10:35.182725 ntpd[1945]: ---------------------------------------------------- Nov 24 00:10:35.182734 ntpd[1945]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:10:35.182743 ntpd[1945]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:10:35.182752 ntpd[1945]: corporation. 
Support and training for ntp-4 are Nov 24 00:10:35.182762 ntpd[1945]: available at https://www.nwtime.org/support Nov 24 00:10:35.182772 ntpd[1945]: ---------------------------------------------------- Nov 24 00:10:35.188122 systemd[1]: motdgen.service: Deactivated successfully. Nov 24 00:10:35.189700 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 24 00:10:35.193338 ntpd[1945]: proto: precision = 0.063 usec (-24) Nov 24 00:10:35.194684 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: proto: precision = 0.063 usec (-24) Nov 24 00:10:35.202782 ntpd[1945]: basedate set to 2025-11-11 Nov 24 00:10:35.202812 ntpd[1945]: gps base set to 2025-11-16 (week 2393) Nov 24 00:10:35.202966 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: basedate set to 2025-11-11 Nov 24 00:10:35.202966 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: gps base set to 2025-11-16 (week 2393) Nov 24 00:10:35.202966 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:10:35.202955 ntpd[1945]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:10:35.203148 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:10:35.202986 ntpd[1945]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:10:35.203212 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:10:35.203271 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:10:35.203271 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Listen normally on 3 eth0 172.31.17.28:123 Nov 24 00:10:35.203249 ntpd[1945]: Listen normally on 3 eth0 172.31.17.28:123 Nov 24 00:10:35.203388 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: Listen normally on 4 lo [::1]:123 Nov 24 00:10:35.203388 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: bind(21) AF_INET6 [fe80::440:1cff:fe45:4c41%2]:123 flags 0x811 failed: Cannot assign requested address Nov 24 00:10:35.203388 ntpd[1945]: 24 Nov 00:10:35 ntpd[1945]: unable to create socket on eth0 (5) for [fe80::440:1cff:fe45:4c41%2]:123 Nov 24 00:10:35.203277 ntpd[1945]: Listen normally on 4 lo [::1]:123 Nov 24 00:10:35.203307 ntpd[1945]: bind(21) AF_INET6 [fe80::440:1cff:fe45:4c41%2]:123 flags 0x811 failed: Cannot assign requested address Nov 24 00:10:35.203328 ntpd[1945]: unable to create socket on eth0 (5) for [fe80::440:1cff:fe45:4c41%2]:123 Nov 24 00:10:35.204302 kernel: ntpd[1945]: segfault at 24 ip 0000559c47446aeb sp 00007ffcdfdbefd0 error 4 in ntpd[68aeb,559c473e4000+80000] likely on CPU 1 (core 0, socket 0) Nov 24 00:10:35.207475 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9 Nov 24 00:10:35.212888 tar[1959]: linux-amd64/LICENSE Nov 24 00:10:35.220269 tar[1959]: linux-amd64/helm Nov 24 00:10:35.220378 jq[1969]: true Nov 24 00:10:35.223147 dbus-daemon[1939]: [system] SELinux support is enabled Nov 24 00:10:35.223386 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 24 00:10:35.228771 systemd-coredump[1992]: Process 1945 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 24 00:10:35.231071 (ntainerd)[1981]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 24 00:10:35.258696 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. 
Nov 24 00:10:35.259444 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 24 00:10:35.259484 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 24 00:10:35.262455 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 24 00:10:35.265666 dbus-daemon[1939]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1746 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 24 00:10:35.262486 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 24 00:10:35.268826 systemd[1]: Started systemd-coredump@0-1992-0.service - Process Core Dump (PID 1992/UID 0). Nov 24 00:10:35.276375 systemd-logind[1955]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 00:10:35.280089 update_engine[1956]: I20251124 00:10:35.278790 1956 update_check_scheduler.cc:74] Next update check in 7m26s Nov 24 00:10:35.276404 systemd-logind[1955]: Watching system buttons on /dev/input/event3 (Sleep Button) Nov 24 00:10:35.276428 systemd-logind[1955]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 00:10:35.277750 systemd-logind[1955]: New seat seat0. Nov 24 00:10:35.279126 systemd[1]: Started systemd-logind.service - User Login Management. Nov 24 00:10:35.284736 systemd[1]: Started update-engine.service - Update Engine. 
Nov 24 00:10:35.297140 extend-filesystems[1942]: Resized partition /dev/nvme0n1p9 Nov 24 00:10:35.298301 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 24 00:10:35.300464 coreos-metadata[1938]: Nov 24 00:10:35.298 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 24 00:10:35.310470 coreos-metadata[1938]: Nov 24 00:10:35.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 24 00:10:35.310470 coreos-metadata[1938]: Nov 24 00:10:35.308 INFO Fetch successful Nov 24 00:10:35.310470 coreos-metadata[1938]: Nov 24 00:10:35.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 24 00:10:35.314259 extend-filesystems[2009]: resize2fs 1.47.3 (8-Jul-2025) Nov 24 00:10:35.318032 coreos-metadata[1938]: Nov 24 00:10:35.312 INFO Fetch successful Nov 24 00:10:35.318032 coreos-metadata[1938]: Nov 24 00:10:35.312 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 24 00:10:35.318032 coreos-metadata[1938]: Nov 24 00:10:35.316 INFO Fetch successful Nov 24 00:10:35.318032 coreos-metadata[1938]: Nov 24 00:10:35.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 24 00:10:35.323179 coreos-metadata[1938]: Nov 24 00:10:35.321 INFO Fetch successful Nov 24 00:10:35.323179 coreos-metadata[1938]: Nov 24 00:10:35.321 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 24 00:10:35.328540 coreos-metadata[1938]: Nov 24 00:10:35.323 INFO Fetch failed with 404: resource not found Nov 24 00:10:35.328540 coreos-metadata[1938]: Nov 24 00:10:35.328 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 24 00:10:35.329875 coreos-metadata[1938]: Nov 24 00:10:35.329 INFO Fetch successful Nov 24 00:10:35.329875 coreos-metadata[1938]: Nov 24 00:10:35.329 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 24 00:10:35.332116 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 24 00:10:35.334167 coreos-metadata[1938]: Nov 24 00:10:35.333 INFO Fetch successful Nov 24 00:10:35.334167 coreos-metadata[1938]: Nov 24 00:10:35.334 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 24 00:10:35.337431 coreos-metadata[1938]: Nov 24 00:10:35.337 INFO Fetch successful Nov 24 00:10:35.337431 coreos-metadata[1938]: Nov 24 00:10:35.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 24 00:10:35.341585 coreos-metadata[1938]: Nov 24 00:10:35.341 INFO Fetch successful Nov 24 00:10:35.341585 coreos-metadata[1938]: Nov 24 00:10:35.341 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 24 00:10:35.342933 coreos-metadata[1938]: Nov 24 00:10:35.342 INFO Fetch successful Nov 24 00:10:35.346846 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 24 00:10:35.359931 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 24 00:10:35.365706 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 24 00:10:35.471325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 24 00:10:35.475389 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 24 00:10:35.527584 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 24 00:10:35.559374 bash[2026]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:10:35.560761 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 24 00:10:35.566633 extend-filesystems[2009]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 24 00:10:35.566633 extend-filesystems[2009]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 24 00:10:35.566633 extend-filesystems[2009]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 24 00:10:35.570782 extend-filesystems[1942]: Resized filesystem in /dev/nvme0n1p9 Nov 24 00:10:35.570345 systemd[1]: Starting sshkeys.service... Nov 24 00:10:35.576415 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 24 00:10:35.576737 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 24 00:10:35.660047 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 24 00:10:35.665450 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 24 00:10:35.815735 coreos-metadata[2098]: Nov 24 00:10:35.815 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 24 00:10:35.818006 coreos-metadata[2098]: Nov 24 00:10:35.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 24 00:10:35.818006 coreos-metadata[2098]: Nov 24 00:10:35.817 INFO Fetch successful Nov 24 00:10:35.818006 coreos-metadata[2098]: Nov 24 00:10:35.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 24 00:10:35.820662 coreos-metadata[2098]: Nov 24 00:10:35.819 INFO Fetch successful Nov 24 00:10:35.823705 unknown[2098]: wrote ssh authorized keys file for user: core Nov 24 00:10:35.856527 sshd_keygen[1989]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 00:10:35.887855 systemd-coredump[1999]: Process 1945 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1945: #0 0x0000559c47446aeb n/a (ntpd + 0x68aeb) #1 0x0000559c473efcdf n/a (ntpd + 0x11cdf) #2 0x0000559c473f0575 n/a (ntpd + 0x12575) #3 0x0000559c473ebd8a n/a (ntpd + 0xdd8a) #4 0x0000559c473ed5d3 n/a (ntpd + 0xf5d3) #5 0x0000559c473f5fd1 n/a (ntpd + 0x17fd1) #6 0x0000559c473e6c2d n/a (ntpd + 0x8c2d) #7 0x00007f353c5a916c n/a (libc.so.6 + 0x2716c) #8 0x00007f353c5a9229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000559c473e6c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64 Nov 24 00:10:35.895056 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 24 00:10:35.896357 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 24 00:10:35.922967 update-ssh-keys[2120]: Updated "/home/core/.ssh/authorized_keys" Nov 24 00:10:35.924179 systemd[1]: systemd-coredump@0-1992-0.service: Deactivated successfully. Nov 24 00:10:35.935812 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 24 00:10:35.956739 systemd[1]: Finished sshkeys.service. 
Nov 24 00:10:35.987177 systemd-networkd[1746]: eth0: Gained IPv6LL Nov 24 00:10:35.997745 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 00:10:36.000781 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 24 00:10:36.001089 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 00:10:36.009967 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 24 00:10:36.019805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:10:36.023922 systemd[1]: Started ntpd.service - Network Time Service. Nov 24 00:10:36.033514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 00:10:36.091340 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 00:10:36.094407 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 24 00:10:36.097289 ntpd[2152]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:10:36.098880 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:13:58 UTC 2025 (1): Starting Nov 24 00:10:36.099180 ntpd[2152]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:10:36.099443 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 24 00:10:36.099509 ntpd[2152]: ---------------------------------------------------- Nov 24 00:10:36.099738 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: ---------------------------------------------------- Nov 24 00:10:36.099809 ntpd[2152]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:10:36.099884 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: ntp-4 is maintained by Network Time Foundation, Nov 24 00:10:36.099929 ntpd[2152]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:10:36.099987 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 24 00:10:36.100039 ntpd[2152]: corporation. Support and training for ntp-4 are Nov 24 00:10:36.100102 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: corporation. Support and training for ntp-4 are Nov 24 00:10:36.100152 ntpd[2152]: available at https://www.nwtime.org/support Nov 24 00:10:36.100215 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: available at https://www.nwtime.org/support Nov 24 00:10:36.100266 ntpd[2152]: ---------------------------------------------------- Nov 24 00:10:36.100327 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: ---------------------------------------------------- Nov 24 00:10:36.103036 ntpd[2152]: proto: precision = 0.098 usec (-23) Nov 24 00:10:36.104038 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: proto: precision = 0.098 usec (-23) Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: basedate set to 2025-11-11 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: gps base set to 2025-11-16 (week 2393) Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listen normally on 3 eth0 172.31.17.28:123 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listen normally on 4 lo [::1]:123 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listen normally on 5 eth0 [fe80::440:1cff:fe45:4c41%2]:123 Nov 24 00:10:36.105361 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: Listening on routing socket on fd #22 for interface updates Nov 24 00:10:36.103299 ntpd[2152]: basedate set to 2025-11-11 Nov 24 00:10:36.103312 ntpd[2152]: gps base set to 2025-11-16 (week 2393) Nov 24 00:10:36.103410 ntpd[2152]: Listen and drop on 0 v6wildcard [::]:123 Nov 24 00:10:36.103440 ntpd[2152]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 24 00:10:36.103650 ntpd[2152]: Listen normally on 2 lo 127.0.0.1:123 Nov 24 00:10:36.103679 ntpd[2152]: Listen normally on 3 eth0 172.31.17.28:123 Nov 24 00:10:36.103711 ntpd[2152]: Listen normally on 4 lo [::1]:123 Nov 24 00:10:36.103746 ntpd[2152]: Listen normally on 5 eth0 [fe80::440:1cff:fe45:4c41%2]:123 Nov 24 00:10:36.103772 ntpd[2152]: Listening on routing socket on fd #22 for interface updates Nov 24 00:10:36.115610 ntpd[2152]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:10:36.117091 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:10:36.117091 ntpd[2152]: 24 Nov 00:10:36 ntpd[2152]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:10:36.115647 ntpd[2152]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 24 00:10:36.124815 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 24 00:10:36.127429 dbus-daemon[1939]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2019 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 24 00:10:36.140232 locksmithd[2006]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 00:10:36.142154 systemd[1]: Starting polkit.service - Authorization Manager... Nov 24 00:10:36.150586 containerd[1981]: time="2025-11-24T00:10:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 24 00:10:36.159380 containerd[1981]: time="2025-11-24T00:10:36.159321843Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 24 00:10:36.181531 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 00:10:36.189468 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 00:10:36.189954 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 00:10:36.195123 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 24 00:10:36.230369 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.234755664Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.544µs" Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.234804206Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.234830618Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235025498Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235049436Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235078237Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235146719Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235160983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235433940Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235456560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235571513Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 24 00:10:36.235882 containerd[1981]: time="2025-11-24T00:10:36.235592326Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 24 00:10:36.236372 containerd[1981]: time="2025-11-24T00:10:36.235707352Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 24 00:10:36.236372 containerd[1981]: time="2025-11-24T00:10:36.235974357Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:10:36.236372 containerd[1981]: time="2025-11-24T00:10:36.236016493Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 24 00:10:36.236372 containerd[1981]: time="2025-11-24T00:10:36.236039462Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 24 00:10:36.236372 containerd[1981]: time="2025-11-24T00:10:36.236076948Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 24 00:10:36.236536 containerd[1981]: time="2025-11-24T00:10:36.236396818Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 24 00:10:36.236536 containerd[1981]: time="2025-11-24T00:10:36.236465018Z" level=info msg="metadata content store policy set" policy=shared Nov 24 00:10:36.240433 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 00:10:36.242520 containerd[1981]: time="2025-11-24T00:10:36.241727671Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 24 00:10:36.242520 containerd[1981]: time="2025-11-24T00:10:36.241808688Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 24 00:10:36.242520 containerd[1981]: time="2025-11-24T00:10:36.241830568Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 24 00:10:36.242520 containerd[1981]: time="2025-11-24T00:10:36.241888270Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 24 00:10:36.247116 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248625721Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248755117Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248795696Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248824071Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248852470Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248872822Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248897757Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.248922308Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.249098063Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.249140441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.249169307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.249187025Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.249207862Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 24 00:10:36.251277 containerd[1981]: time="2025-11-24T00:10:36.249228270Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 24 00:10:36.249909 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249250022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249265620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249289454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249313462Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249337781Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249417453Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249447246Z" level=info msg="Start snapshots syncer" Nov 24 00:10:36.251889 containerd[1981]: time="2025-11-24T00:10:36.249475663Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 24 00:10:36.268329 containerd[1981]: time="2025-11-24T00:10:36.266770415Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 00:10:36.268329 containerd[1981]: 
time="2025-11-24T00:10:36.268055591Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273352596Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273641051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273681196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273699443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273716524Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273736174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273754449Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273770231Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273808044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273825340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.273842615Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.274249023Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.274673788Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 00:10:36.275574 containerd[1981]: time="2025-11-24T00:10:36.274701500Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274725227Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274741286Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274767565Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274793144Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 00:10:36.276187 containerd[1981]: 
time="2025-11-24T00:10:36.274823515Z" level=info msg="runtime interface created" Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274836275Z" level=info msg="created NRI interface" Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274851873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274878827Z" level=info msg="Connect containerd service" Nov 24 00:10:36.276187 containerd[1981]: time="2025-11-24T00:10:36.274924149Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 00:10:36.277019 containerd[1981]: time="2025-11-24T00:10:36.276971682Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 00:10:36.337607 amazon-ssm-agent[2150]: Initializing new seelog logger Nov 24 00:10:36.338178 amazon-ssm-agent[2150]: New Seelog Logger Creation Complete Nov 24 00:10:36.338318 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.338380 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.339007 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 processing appconfig overrides Nov 24 00:10:36.339464 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.339573 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.339727 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 processing appconfig overrides Nov 24 00:10:36.340113 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.340179 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.340314 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 processing appconfig overrides Nov 24 00:10:36.341770 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.3393 INFO Proxy environment variables: Nov 24 00:10:36.346503 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:36.346886 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 24 00:10:36.349181 amazon-ssm-agent[2150]: 2025/11/24 00:10:36 processing appconfig overrides Nov 24 00:10:36.439879 polkitd[2172]: Started polkitd version 126 Nov 24 00:10:36.441831 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.3394 INFO https_proxy: Nov 24 00:10:36.450197 polkitd[2172]: Loading rules from directory /etc/polkit-1/rules.d Nov 24 00:10:36.461596 polkitd[2172]: Loading rules from directory /run/polkit-1/rules.d Nov 24 00:10:36.461681 polkitd[2172]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:10:36.462137 polkitd[2172]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 24 00:10:36.462184 polkitd[2172]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 24 00:10:36.462244 polkitd[2172]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 24 00:10:36.464912 polkitd[2172]: Finished loading, compiling and executing 2 rules Nov 24 00:10:36.465544 systemd[1]: Started polkit.service - Authorization Manager. Nov 24 00:10:36.468264 dbus-daemon[1939]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 24 00:10:36.469192 polkitd[2172]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 24 00:10:36.491630 systemd-hostnamed[2019]: Hostname set to (transient) Nov 24 00:10:36.491753 systemd-resolved[1891]: System hostname changed to 'ip-172-31-17-28'. Nov 24 00:10:36.541934 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.3394 INFO http_proxy: Nov 24 00:10:36.645797 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.3394 INFO no_proxy: Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.652765667Z" level=info msg="Start subscribing containerd event" Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.652830390Z" level=info msg="Start recovering state" Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.652960959Z" level=info msg="Start event monitor" Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.652977744Z" level=info msg="Start cni network conf syncer for default" Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.652987854Z" level=info msg="Start streaming server" Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.653006418Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.653018232Z" level=info msg="runtime interface starting up..." Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.653027102Z" level=info msg="starting plugins..." Nov 24 00:10:36.653159 containerd[1981]: time="2025-11-24T00:10:36.653043072Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 00:10:36.658866 containerd[1981]: time="2025-11-24T00:10:36.656360857Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 00:10:36.658866 containerd[1981]: time="2025-11-24T00:10:36.656528745Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 00:10:36.659906 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 24 00:10:36.661481 containerd[1981]: time="2025-11-24T00:10:36.660715574Z" level=info msg="containerd successfully booted in 0.515185s" Nov 24 00:10:36.737517 tar[1959]: linux-amd64/README.md Nov 24 00:10:36.744574 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.3397 INFO Checking if agent identity type OnPrem can be assumed Nov 24 00:10:36.759679 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 00:10:36.842913 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.3399 INFO Checking if agent identity type EC2 can be assumed Nov 24 00:10:36.942407 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4759 INFO Agent will take identity from EC2 Nov 24 00:10:37.041799 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4787 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 24 00:10:37.044519 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 00:10:37.047632 systemd[1]: Started sshd@0-172.31.17.28:22-139.178.68.195:48710.service - OpenSSH per-connection server daemon (139.178.68.195:48710). Nov 24 00:10:37.062045 amazon-ssm-agent[2150]: 2025/11/24 00:10:37 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:37.062045 amazon-ssm-agent[2150]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 24 00:10:37.062291 amazon-ssm-agent[2150]: 2025/11/24 00:10:37 processing appconfig overrides Nov 24 00:10:37.126587 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4787 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4787 INFO [amazon-ssm-agent] Starting Core Agent Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4787 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4787 INFO [Registrar] Starting registrar module Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4807 INFO [EC2Identity] Checking disk for registration info Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4807 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:36.4807 INFO [EC2Identity] Generating registration keypair Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.0219 INFO [EC2Identity] Checking write access before registering Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.0224 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.0616 INFO [EC2Identity] EC2 registration was successful. Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.0616 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.0619 INFO [CredentialRefresher] credentialRefresher has started Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.0619 INFO [CredentialRefresher] Starting credentials refresher loop Nov 24 00:10:37.126746 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.1259 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 24 00:10:37.127521 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.1265 INFO [CredentialRefresher] Credentials ready Nov 24 00:10:37.140934 amazon-ssm-agent[2150]: 2025-11-24 00:10:37.1268 INFO [CredentialRefresher] Next credential rotation will be in 29.999985654783334 minutes Nov 24 00:10:37.333305 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 48710 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:37.337083 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:37.348434 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 00:10:37.350729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 00:10:37.365626 systemd-logind[1955]: New session 1 of user core. Nov 24 00:10:37.380898 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 00:10:37.385538 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 24 00:10:37.411193 (systemd)[2221]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 00:10:37.414291 systemd-logind[1955]: New session c1 of user core. Nov 24 00:10:37.601488 systemd[2221]: Queued start job for default target default.target. Nov 24 00:10:37.607794 systemd[2221]: Created slice app.slice - User Application Slice. Nov 24 00:10:37.608171 systemd[2221]: Reached target paths.target - Paths. Nov 24 00:10:37.608240 systemd[2221]: Reached target timers.target - Timers. Nov 24 00:10:37.610668 systemd[2221]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 00:10:37.624392 systemd[2221]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 00:10:37.624767 systemd[2221]: Reached target sockets.target - Sockets. Nov 24 00:10:37.624939 systemd[2221]: Reached target basic.target - Basic System. Nov 24 00:10:37.625090 systemd[2221]: Reached target default.target - Main User Target. Nov 24 00:10:37.625125 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 00:10:37.625284 systemd[2221]: Startup finished in 202ms. Nov 24 00:10:37.633793 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 00:10:37.786888 systemd[1]: Started sshd@1-172.31.17.28:22-139.178.68.195:48726.service - OpenSSH per-connection server daemon (139.178.68.195:48726). Nov 24 00:10:37.983954 sshd[2232]: Accepted publickey for core from 139.178.68.195 port 48726 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:37.989345 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:38.005675 systemd-logind[1955]: New session 2 of user core. Nov 24 00:10:38.012840 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 24 00:10:38.143928 amazon-ssm-agent[2150]: 2025-11-24 00:10:38.1433 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 24 00:10:38.154313 sshd[2235]: Connection closed by 139.178.68.195 port 48726 Nov 24 00:10:38.157156 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:38.163257 systemd[1]: sshd@1-172.31.17.28:22-139.178.68.195:48726.service: Deactivated successfully. Nov 24 00:10:38.166330 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 00:10:38.168616 systemd-logind[1955]: Session 2 logged out. Waiting for processes to exit. Nov 24 00:10:38.171934 systemd-logind[1955]: Removed session 2. Nov 24 00:10:38.192123 systemd[1]: Started sshd@2-172.31.17.28:22-139.178.68.195:39322.service - OpenSSH per-connection server daemon (139.178.68.195:39322). Nov 24 00:10:38.245516 amazon-ssm-agent[2150]: 2025-11-24 00:10:38.1461 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2239) started Nov 24 00:10:38.345456 amazon-ssm-agent[2150]: 2025-11-24 00:10:38.1462 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 24 00:10:38.389839 sshd[2244]: Accepted publickey for core from 139.178.68.195 port 39322 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:38.393961 sshd-session[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:38.403307 systemd-logind[1955]: New session 3 of user core. Nov 24 00:10:38.406894 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 00:10:38.553388 sshd[2258]: Connection closed by 139.178.68.195 port 39322 Nov 24 00:10:38.554354 sshd-session[2244]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:38.559026 systemd[1]: sshd@2-172.31.17.28:22-139.178.68.195:39322.service: Deactivated successfully. Nov 24 00:10:38.561859 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 00:10:38.563658 systemd-logind[1955]: Session 3 logged out. Waiting for processes to exit. Nov 24 00:10:38.564973 systemd-logind[1955]: Removed session 3. Nov 24 00:10:38.957382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:10:38.958856 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 00:10:38.961409 systemd[1]: Startup finished in 2.989s (kernel) + 7.894s (initrd) + 8.480s (userspace) = 19.364s. Nov 24 00:10:38.969991 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:10:40.373635 kubelet[2268]: E1124 00:10:40.373577 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:10:40.376816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:10:40.377027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:10:40.377623 systemd[1]: kubelet.service: Consumed 1.147s CPU time, 264M memory peak. Nov 24 00:10:45.185245 systemd-resolved[1891]: Clock change detected. Flushing caches. 
Nov 24 00:10:50.671186 systemd[1]: Started sshd@3-172.31.17.28:22-139.178.68.195:53062.service - OpenSSH per-connection server daemon (139.178.68.195:53062). Nov 24 00:10:50.843280 sshd[2280]: Accepted publickey for core from 139.178.68.195 port 53062 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:50.845075 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:50.851493 systemd-logind[1955]: New session 4 of user core. Nov 24 00:10:50.857115 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 00:10:50.978109 sshd[2283]: Connection closed by 139.178.68.195 port 53062 Nov 24 00:10:50.979018 sshd-session[2280]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:50.983917 systemd[1]: sshd@3-172.31.17.28:22-139.178.68.195:53062.service: Deactivated successfully. Nov 24 00:10:50.986200 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 00:10:50.987243 systemd-logind[1955]: Session 4 logged out. Waiting for processes to exit. Nov 24 00:10:50.989196 systemd-logind[1955]: Removed session 4. Nov 24 00:10:51.010953 systemd[1]: Started sshd@4-172.31.17.28:22-139.178.68.195:53068.service - OpenSSH per-connection server daemon (139.178.68.195:53068). Nov 24 00:10:51.195408 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 53068 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:51.197126 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:51.203289 systemd-logind[1955]: New session 5 of user core. Nov 24 00:10:51.210116 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 24 00:10:51.324800 sshd[2292]: Connection closed by 139.178.68.195 port 53068 Nov 24 00:10:51.329109 sshd-session[2289]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:51.340692 systemd[1]: sshd@4-172.31.17.28:22-139.178.68.195:53068.service: Deactivated successfully. Nov 24 00:10:51.349134 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 00:10:51.351196 systemd-logind[1955]: Session 5 logged out. Waiting for processes to exit. Nov 24 00:10:51.370049 systemd[1]: Started sshd@5-172.31.17.28:22-139.178.68.195:53072.service - OpenSSH per-connection server daemon (139.178.68.195:53072). Nov 24 00:10:51.371689 systemd-logind[1955]: Removed session 5. Nov 24 00:10:51.549073 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 53072 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:51.550500 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:51.555417 systemd-logind[1955]: New session 6 of user core. Nov 24 00:10:51.563173 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 24 00:10:51.686984 sshd[2301]: Connection closed by 139.178.68.195 port 53072 Nov 24 00:10:51.687868 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:51.692536 systemd[1]: sshd@5-172.31.17.28:22-139.178.68.195:53072.service: Deactivated successfully. Nov 24 00:10:51.694533 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 00:10:51.695651 systemd-logind[1955]: Session 6 logged out. Waiting for processes to exit. Nov 24 00:10:51.697552 systemd-logind[1955]: Removed session 6. Nov 24 00:10:51.721587 systemd[1]: Started sshd@6-172.31.17.28:22-139.178.68.195:53086.service - OpenSSH per-connection server daemon (139.178.68.195:53086). 
Nov 24 00:10:51.897533 sshd[2307]: Accepted publickey for core from 139.178.68.195 port 53086 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:51.898858 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:51.904536 systemd-logind[1955]: New session 7 of user core. Nov 24 00:10:51.907025 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 00:10:52.058094 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 00:10:52.058364 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:10:52.074356 sudo[2311]: pam_unix(sudo:session): session closed for user root Nov 24 00:10:52.097568 sshd[2310]: Connection closed by 139.178.68.195 port 53086 Nov 24 00:10:52.098294 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:52.102743 systemd[1]: sshd@6-172.31.17.28:22-139.178.68.195:53086.service: Deactivated successfully. Nov 24 00:10:52.105645 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 00:10:52.107473 systemd-logind[1955]: Session 7 logged out. Waiting for processes to exit. Nov 24 00:10:52.109323 systemd-logind[1955]: Removed session 7. Nov 24 00:10:52.133864 systemd[1]: Started sshd@7-172.31.17.28:22-139.178.68.195:53100.service - OpenSSH per-connection server daemon (139.178.68.195:53100). Nov 24 00:10:52.312870 sshd[2317]: Accepted publickey for core from 139.178.68.195 port 53100 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:52.314155 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:52.320485 systemd-logind[1955]: New session 8 of user core. Nov 24 00:10:52.326113 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 00:10:52.428493 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 00:10:52.428940 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:10:52.434635 sudo[2322]: pam_unix(sudo:session): session closed for user root Nov 24 00:10:52.440508 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 00:10:52.440781 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:10:52.451677 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 00:10:52.467598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 00:10:52.471077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:10:52.505410 augenrules[2347]: No rules Nov 24 00:10:52.506787 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 00:10:52.507237 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 00:10:52.510018 sudo[2321]: pam_unix(sudo:session): session closed for user root Nov 24 00:10:52.534323 sshd[2320]: Connection closed by 139.178.68.195 port 53100 Nov 24 00:10:52.535109 sshd-session[2317]: pam_unix(sshd:session): session closed for user core Nov 24 00:10:52.539782 systemd[1]: sshd@7-172.31.17.28:22-139.178.68.195:53100.service: Deactivated successfully. Nov 24 00:10:52.541711 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 00:10:52.544710 systemd-logind[1955]: Session 8 logged out. Waiting for processes to exit. 
Nov 24 00:10:52.546040 systemd-logind[1955]: Removed session 8. Nov 24 00:10:52.566793 systemd[1]: Started sshd@8-172.31.17.28:22-139.178.68.195:53116.service - OpenSSH per-connection server daemon (139.178.68.195:53116). Nov 24 00:10:52.705445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:10:52.715405 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:10:52.747423 sshd[2356]: Accepted publickey for core from 139.178.68.195 port 53116 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:10:52.749382 sshd-session[2356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:10:52.759419 systemd-logind[1955]: New session 9 of user core. Nov 24 00:10:52.764319 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 00:10:52.778165 kubelet[2364]: E1124 00:10:52.778128 2364 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:10:52.782479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:10:52.782661 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:10:52.783083 systemd[1]: kubelet.service: Consumed 190ms CPU time, 110M memory peak. Nov 24 00:10:52.864672 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 00:10:52.865061 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 00:10:53.544583 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 00:10:53.561049 (dockerd)[2390]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 00:10:54.214828 dockerd[2390]: time="2025-11-24T00:10:54.214753311Z" level=info msg="Starting up" Nov 24 00:10:54.216884 dockerd[2390]: time="2025-11-24T00:10:54.216494982Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 00:10:54.232032 dockerd[2390]: time="2025-11-24T00:10:54.231982754Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 00:10:54.295576 dockerd[2390]: time="2025-11-24T00:10:54.295452097Z" level=info msg="Loading containers: start." Nov 24 00:10:54.309880 kernel: Initializing XFRM netlink socket Nov 24 00:10:54.591232 (udev-worker)[2412]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:10:54.641656 systemd-networkd[1746]: docker0: Link UP Nov 24 00:10:54.647298 dockerd[2390]: time="2025-11-24T00:10:54.647230076Z" level=info msg="Loading containers: done." 
Nov 24 00:10:54.666221 dockerd[2390]: time="2025-11-24T00:10:54.666172518Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 00:10:54.666494 dockerd[2390]: time="2025-11-24T00:10:54.666279786Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 00:10:54.666494 dockerd[2390]: time="2025-11-24T00:10:54.666418079Z" level=info msg="Initializing buildkit" Nov 24 00:10:54.695473 dockerd[2390]: time="2025-11-24T00:10:54.695418329Z" level=info msg="Completed buildkit initialization" Nov 24 00:10:54.704520 dockerd[2390]: time="2025-11-24T00:10:54.704468336Z" level=info msg="Daemon has completed initialization" Nov 24 00:10:54.704866 dockerd[2390]: time="2025-11-24T00:10:54.704529259Z" level=info msg="API listen on /run/docker.sock" Nov 24 00:10:54.705001 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 00:10:56.266861 containerd[1981]: time="2025-11-24T00:10:56.266804073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 24 00:10:56.805725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136277688.mount: Deactivated successfully. Nov 24 00:10:58.951174 containerd[1981]: time="2025-11-24T00:10:58.951108103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:10:58.954136 containerd[1981]: time="2025-11-24T00:10:58.953733873Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072183" Nov 24 00:10:58.957442 containerd[1981]: time="2025-11-24T00:10:58.957394878Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:10:58.966781 containerd[1981]: time="2025-11-24T00:10:58.965838986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:10:58.968308 containerd[1981]: time="2025-11-24T00:10:58.968258587Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 2.701407087s" Nov 24 00:10:58.968446 containerd[1981]: time="2025-11-24T00:10:58.968314723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Nov 24 00:10:58.969749 containerd[1981]: time="2025-11-24T00:10:58.969715709Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 24 00:11:01.715817 containerd[1981]: time="2025-11-24T00:11:01.715761424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:01.717180 containerd[1981]: time="2025-11-24T00:11:01.716990076Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992010" Nov 24 00:11:01.718773 containerd[1981]: time="2025-11-24T00:11:01.718733882Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:01.725069 containerd[1981]: time="2025-11-24T00:11:01.725017289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:01.728863 containerd[1981]: time="2025-11-24T00:11:01.727311433Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.757414558s" Nov 24 00:11:01.728863 containerd[1981]: time="2025-11-24T00:11:01.727361240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Nov 24 00:11:01.729158 containerd[1981]: time="2025-11-24T00:11:01.729100631Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 24 00:11:02.879577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 24 00:11:02.893487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:11:03.593035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:11:03.605679 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:11:03.704482 kubelet[2675]: E1124 00:11:03.704336 2675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:11:03.708855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:11:03.709049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:11:03.710121 systemd[1]: kubelet.service: Consumed 233ms CPU time, 108.4M memory peak. 
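The kubelet.service failures above (restart counters 1 and 2, and again at counter 3 later in the log) all hit the same precondition: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd schedules another restart. That file is typically written by kubeadm or by whatever provisioning step /home/core/install.sh performs here; once the host is configured and systemd reloads, the unit comes up with a config and only deprecated-flag warnings. A minimal sketch, assuming one wants to reproduce that failing check outside the unit; the path is the one reported in the journal, the helper itself is hypothetical and not part of kubelet or kubeadm.

```python
#!/usr/bin/env python3
"""Hypothetical check mirroring the kubelet failure seen in the journal:
exit 1 if /var/lib/kubelet/config.yaml is missing ("no such file or directory")."""
import sys
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path reported in the log

def main() -> int:
    if not KUBELET_CONFIG.is_file():
        print(f"missing {KUBELET_CONFIG}: kubelet would exit 1/FAILURE", file=sys.stderr)
        return 1
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```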
Nov 24 00:11:04.162285 containerd[1981]: time="2025-11-24T00:11:04.162222766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:04.163542 containerd[1981]: time="2025-11-24T00:11:04.163493972Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404248" Nov 24 00:11:04.166525 containerd[1981]: time="2025-11-24T00:11:04.166296670Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:04.182273 containerd[1981]: time="2025-11-24T00:11:04.182195705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:04.187948 containerd[1981]: time="2025-11-24T00:11:04.187895341Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 2.458737082s" Nov 24 00:11:04.189294 containerd[1981]: time="2025-11-24T00:11:04.188109840Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Nov 24 00:11:04.189861 containerd[1981]: time="2025-11-24T00:11:04.189809887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 24 00:11:05.570250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177746914.mount: Deactivated successfully. 
Nov 24 00:11:06.364434 containerd[1981]: time="2025-11-24T00:11:06.364348356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:06.371117 containerd[1981]: time="2025-11-24T00:11:06.370790803Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161423" Nov 24 00:11:06.381858 containerd[1981]: time="2025-11-24T00:11:06.381784219Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:06.401231 containerd[1981]: time="2025-11-24T00:11:06.401173352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:06.404598 containerd[1981]: time="2025-11-24T00:11:06.404408731Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.212859678s" Nov 24 00:11:06.404598 containerd[1981]: time="2025-11-24T00:11:06.404462603Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Nov 24 00:11:06.405586 containerd[1981]: time="2025-11-24T00:11:06.405446169Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 24 00:11:07.087707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558242404.mount: Deactivated successfully. 
Nov 24 00:11:08.510107 containerd[1981]: time="2025-11-24T00:11:08.507646991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:08.510107 containerd[1981]: time="2025-11-24T00:11:08.510062175Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 24 00:11:08.511018 containerd[1981]: time="2025-11-24T00:11:08.510960362Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:08.515755 containerd[1981]: time="2025-11-24T00:11:08.515697121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:08.519417 containerd[1981]: time="2025-11-24T00:11:08.519364198Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.113872346s" Nov 24 00:11:08.519603 containerd[1981]: time="2025-11-24T00:11:08.519582222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 24 00:11:08.521273 containerd[1981]: time="2025-11-24T00:11:08.521231737Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 24 00:11:08.589913 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 24 00:11:09.018698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043257184.mount: Deactivated successfully. 
Nov 24 00:11:09.028995 containerd[1981]: time="2025-11-24T00:11:09.028750511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:11:09.031829 containerd[1981]: time="2025-11-24T00:11:09.031628277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 00:11:09.033407 containerd[1981]: time="2025-11-24T00:11:09.033363495Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:11:09.036600 containerd[1981]: time="2025-11-24T00:11:09.036511351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 00:11:09.038969 containerd[1981]: time="2025-11-24T00:11:09.038157111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 516.884545ms" Nov 24 00:11:09.038969 containerd[1981]: time="2025-11-24T00:11:09.038201421Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 24 00:11:09.039153 containerd[1981]: time="2025-11-24T00:11:09.039115339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 24 00:11:09.560629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030799087.mount: Deactivated successfully. 
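Unlike the other pulls, the pause:3.10 image events above carry an extra label, io.cri-containerd.pinned=pinned: containerd marks the sandbox image as pinned, so image garbage collection is expected to leave it in place. A small sketch of filtering on that label, assuming image records shaped like the ImageCreate events in the log; the list below is illustrative, copied from names that appear above.

```python
# Separate pinned images (like pause:3.10 above) from unpinned ones,
# assuming records shaped like the containerd ImageCreate events in the log.
images = [
    {"name": "registry.k8s.io/pause:3.10",
     "labels": {"io.cri-containerd.image": "managed",
                "io.cri-containerd.pinned": "pinned"}},
    {"name": "registry.k8s.io/coredns/coredns:v1.11.3",
     "labels": {"io.cri-containerd.image": "managed"}},
]

pinned = [i["name"] for i in images
          if i["labels"].get("io.cri-containerd.pinned") == "pinned"]
print("pinned (exempt from image GC):", pinned)
```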
Nov 24 00:11:12.496620 containerd[1981]: time="2025-11-24T00:11:12.496320012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:12.498349 containerd[1981]: time="2025-11-24T00:11:12.498300951Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 24 00:11:12.500278 containerd[1981]: time="2025-11-24T00:11:12.499835478Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:12.506937 containerd[1981]: time="2025-11-24T00:11:12.504929286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:12.508591 containerd[1981]: time="2025-11-24T00:11:12.507726084Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.468576489s" Nov 24 00:11:12.508591 containerd[1981]: time="2025-11-24T00:11:12.507785874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 24 00:11:13.879727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 24 00:11:13.885155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:11:14.200166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:11:14.218386 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 00:11:14.293515 kubelet[2831]: E1124 00:11:14.293467 2831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 00:11:14.297115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 00:11:14.297317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 00:11:14.297794 systemd[1]: kubelet.service: Consumed 228ms CPU time, 108.6M memory peak. Nov 24 00:11:16.194833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:11:16.195093 systemd[1]: kubelet.service: Consumed 228ms CPU time, 108.6M memory peak. Nov 24 00:11:16.198134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:11:16.235522 systemd[1]: Reload requested from client PID 2845 ('systemctl') (unit session-9.scope)... Nov 24 00:11:16.235549 systemd[1]: Reloading... Nov 24 00:11:16.431889 zram_generator::config[2890]: No configuration found. Nov 24 00:11:16.727607 systemd[1]: Reloading finished in 491 ms. Nov 24 00:11:16.801580 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 00:11:16.801698 systemd[1]: kubelet.service: Failed with result 'signal'. 
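Each completed pull above logs both a byte count and the elapsed wall time, so effective pull throughput can be read straight off the journal; the etcd image, for instance, works out to roughly 16.6 MB/s (57,680,541 bytes in about 3.47 s). A short sketch with the figures copied verbatim from the containerd messages above; only the arithmetic is added.

```python
# Effective pull throughput from the sizes and durations logged above
# (bytes, seconds), copied verbatim from the "Pulled image" messages.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.32.10": (29_068_782, 2.701407087),
    "registry.k8s.io/kube-controller-manager:v1.32.10": (26_649_046, 2.757414558),
    "registry.k8s.io/kube-scheduler:v1.32.10": (21_061_302, 2.458737082),
    "registry.k8s.io/kube-proxy:v1.32.10": (31_160_442, 2.212859678),
    "registry.k8s.io/coredns/coredns:v1.11.3": (18_562_039, 2.113872346),
    "registry.k8s.io/etcd:3.5.16-0": (57_680_541, 3.468576489),
}

for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
```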
Nov 24 00:11:16.802111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:11:16.802162 systemd[1]: kubelet.service: Consumed 153ms CPU time, 98.3M memory peak. Nov 24 00:11:16.804398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:11:17.104931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:11:17.117511 (kubelet)[2952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:11:17.171998 kubelet[2952]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:11:17.171998 kubelet[2952]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:11:17.171998 kubelet[2952]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:11:17.172601 kubelet[2952]: I1124 00:11:17.172090 2952 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:11:17.459418 kubelet[2952]: I1124 00:11:17.459366 2952 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 00:11:17.459418 kubelet[2952]: I1124 00:11:17.459404 2952 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:11:17.459772 kubelet[2952]: I1124 00:11:17.459747 2952 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 00:11:17.528252 kubelet[2952]: E1124 00:11:17.528108 2952 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:17.532100 kubelet[2952]: I1124 00:11:17.532060 2952 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:11:17.565236 kubelet[2952]: I1124 00:11:17.565202 2952 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:11:17.571799 kubelet[2952]: I1124 00:11:17.571763 2952 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:11:17.574320 kubelet[2952]: I1124 00:11:17.574240 2952 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:11:17.574551 kubelet[2952]: I1124 00:11:17.574298 2952 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:11:17.577721 kubelet[2952]: I1124 00:11:17.577654 2952 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:11:17.577872 kubelet[2952]: I1124 00:11:17.577750 2952 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 00:11:17.579783 kubelet[2952]: I1124 00:11:17.579736 2952 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:11:17.586191 kubelet[2952]: I1124 00:11:17.586078 2952 kubelet.go:446] "Attempting to sync node with API server" Nov 24 00:11:17.586191 kubelet[2952]: I1124 00:11:17.586177 2952 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:11:17.589106 kubelet[2952]: I1124 00:11:17.588666 2952 kubelet.go:352] "Adding apiserver pod source" Nov 24 00:11:17.589106 kubelet[2952]: I1124 00:11:17.588702 2952 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:11:17.601721 kubelet[2952]: W1124 00:11:17.601655 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-28&limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:17.602313 kubelet[2952]: E1124 00:11:17.602283 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-28&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:17.602581 kubelet[2952]: I1124 
00:11:17.602551 2952 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:11:17.608071 kubelet[2952]: I1124 00:11:17.608030 2952 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 00:11:17.608555 kubelet[2952]: W1124 00:11:17.608111 2952 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 00:11:17.614954 kubelet[2952]: W1124 00:11:17.614632 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:17.614954 kubelet[2952]: E1124 00:11:17.614687 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:17.614954 kubelet[2952]: I1124 00:11:17.614740 2952 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:11:17.614954 kubelet[2952]: I1124 00:11:17.614770 2952 server.go:1287] "Started kubelet" Nov 24 00:11:17.623824 kubelet[2952]: I1124 00:11:17.623794 2952 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:11:17.627605 kubelet[2952]: E1124 00:11:17.623834 2952 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.28:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-28.187ac8e2213c9080 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-28,UID:ip-172-31-17-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-28,},FirstTimestamp:2025-11-24 00:11:17.614751872 +0000 UTC m=+0.491217336,LastTimestamp:2025-11-24 00:11:17.614751872 +0000 UTC m=+0.491217336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-28,}" Nov 24 00:11:17.627908 kubelet[2952]: I1124 00:11:17.627727 2952 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:11:17.627908 kubelet[2952]: I1124 00:11:17.627889 2952 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 00:11:17.629445 kubelet[2952]: I1124 00:11:17.629287 2952 server.go:479] "Adding debug handlers to kubelet server" Nov 24 00:11:17.630922 kubelet[2952]: I1124 00:11:17.630830 2952 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:11:17.631161 kubelet[2952]: I1124 00:11:17.631141 2952 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:11:17.633905 kubelet[2952]: E1124 00:11:17.633875 2952 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-28\" not found" Nov 24 00:11:17.638798 kubelet[2952]: I1124 00:11:17.638767 2952 
volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:11:17.640907 kubelet[2952]: I1124 00:11:17.640514 2952 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:11:17.640907 kubelet[2952]: I1124 00:11:17.640609 2952 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:11:17.641497 kubelet[2952]: W1124 00:11:17.641435 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:17.641625 kubelet[2952]: E1124 00:11:17.641605 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:17.641792 kubelet[2952]: E1124 00:11:17.641759 2952 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="200ms" Nov 24 00:11:17.652643 kubelet[2952]: E1124 00:11:17.652602 2952 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:11:17.652925 kubelet[2952]: I1124 00:11:17.652906 2952 factory.go:221] Registration of the containerd container factory successfully Nov 24 00:11:17.652925 kubelet[2952]: I1124 00:11:17.652925 2952 factory.go:221] Registration of the systemd container factory successfully Nov 24 00:11:17.653077 kubelet[2952]: I1124 00:11:17.653055 2952 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:11:17.661881 kubelet[2952]: I1124 00:11:17.659890 2952 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 00:11:17.661881 kubelet[2952]: I1124 00:11:17.661578 2952 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 00:11:17.661881 kubelet[2952]: I1124 00:11:17.661606 2952 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 00:11:17.661881 kubelet[2952]: I1124 00:11:17.661632 2952 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
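The Container Manager config printed above spells out the kubelet's hard eviction thresholds on this node: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A minimal sketch of evaluating those signals, assuming made-up node stats; the thresholds are the ones from the logged nodeConfig, everything else is illustrative.

```python
# Evaluate the hard eviction thresholds from the Container Manager config
# logged above against some made-up node stats (observed values are illustrative).
thresholds = {
    "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

# Hypothetical observations: (available, capacity) per signal.
observed = {
    "memory.available": (80 * 1024 * 1024, 2 * 1024**3),
    "nodefs.available": (12 * 1024**3, 40 * 1024**3),
    "nodefs.inodesFree": (900_000, 10_000_000),
    "imagefs.available": (5 * 1024**3, 40 * 1024**3),
    "imagefs.inodesFree": (400_000, 10_000_000),
}

for signal, (kind, limit) in thresholds.items():
    available, capacity = observed[signal]
    value = available if kind == "quantity" else available / capacity
    status = "EVICT" if value < limit else "ok"
    print(f"{signal}: {status} (observed {value}, threshold {limit})")
```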
Nov 24 00:11:17.661881 kubelet[2952]: I1124 00:11:17.661644 2952 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 00:11:17.661881 kubelet[2952]: E1124 00:11:17.661700 2952 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:11:17.669061 kubelet[2952]: W1124 00:11:17.668998 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:17.673430 kubelet[2952]: E1124 00:11:17.673341 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:17.684662 kubelet[2952]: I1124 00:11:17.684633 2952 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:11:17.684662 kubelet[2952]: I1124 00:11:17.684656 2952 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:11:17.684936 kubelet[2952]: I1124 00:11:17.684694 2952 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:11:17.686809 kubelet[2952]: I1124 00:11:17.686771 2952 policy_none.go:49] "None policy: Start" Nov 24 00:11:17.686809 kubelet[2952]: I1124 00:11:17.686801 2952 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:11:17.687000 kubelet[2952]: I1124 00:11:17.686825 2952 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:11:17.695167 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 00:11:17.710135 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 00:11:17.716810 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 24 00:11:17.729319 kubelet[2952]: I1124 00:11:17.729274 2952 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 00:11:17.729622 kubelet[2952]: I1124 00:11:17.729489 2952 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:11:17.729622 kubelet[2952]: I1124 00:11:17.729504 2952 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:11:17.731784 kubelet[2952]: I1124 00:11:17.731761 2952 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:11:17.733956 kubelet[2952]: E1124 00:11:17.733872 2952 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 00:11:17.733956 kubelet[2952]: E1124 00:11:17.733932 2952 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-28\" not found" Nov 24 00:11:17.775810 systemd[1]: Created slice kubepods-burstable-podf1833ea8f8f8be828d3c9868a5be23c7.slice - libcontainer container kubepods-burstable-podf1833ea8f8f8be828d3c9868a5be23c7.slice. 
Nov 24 00:11:17.787134 kubelet[2952]: E1124 00:11:17.786827 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:17.789558 systemd[1]: Created slice kubepods-burstable-pod407286a1fda5cdf07c91f47e04839253.slice - libcontainer container kubepods-burstable-pod407286a1fda5cdf07c91f47e04839253.slice. Nov 24 00:11:17.808788 kubelet[2952]: E1124 00:11:17.808604 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:17.813492 systemd[1]: Created slice kubepods-burstable-pod02952c0be9c878d0052f76252e24e396.slice - libcontainer container kubepods-burstable-pod02952c0be9c878d0052f76252e24e396.slice. Nov 24 00:11:17.818866 kubelet[2952]: E1124 00:11:17.817969 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:17.832532 kubelet[2952]: I1124 00:11:17.832498 2952 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Nov 24 00:11:17.833042 kubelet[2952]: E1124 00:11:17.833001 2952 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Nov 24 00:11:17.843182 kubelet[2952]: I1124 00:11:17.842971 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:17.843182 kubelet[2952]: I1124 00:11:17.843008 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:17.843182 kubelet[2952]: E1124 00:11:17.843012 2952 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="400ms" Nov 24 00:11:17.843182 kubelet[2952]: I1124 00:11:17.843033 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02952c0be9c878d0052f76252e24e396-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-28\" (UID: \"02952c0be9c878d0052f76252e24e396\") " pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:17.843182 kubelet[2952]: I1124 00:11:17.843050 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1833ea8f8f8be828d3c9868a5be23c7-ca-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"f1833ea8f8f8be828d3c9868a5be23c7\") " pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:17.843542 kubelet[2952]: I1124 00:11:17.843066 2952 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1833ea8f8f8be828d3c9868a5be23c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"f1833ea8f8f8be828d3c9868a5be23c7\") " pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:17.843542 kubelet[2952]: I1124 00:11:17.843082 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:17.843542 kubelet[2952]: I1124 00:11:17.843098 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:17.843542 kubelet[2952]: I1124 00:11:17.843113 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1833ea8f8f8be828d3c9868a5be23c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"f1833ea8f8f8be828d3c9868a5be23c7\") " pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:17.843542 kubelet[2952]: I1124 00:11:17.843128 2952 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:18.036980 kubelet[2952]: I1124 00:11:18.036876 2952 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Nov 24 00:11:18.038048 kubelet[2952]: E1124 00:11:18.038009 2952 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Nov 24 00:11:18.090255 containerd[1981]: time="2025-11-24T00:11:18.089389519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-28,Uid:f1833ea8f8f8be828d3c9868a5be23c7,Namespace:kube-system,Attempt:0,}" Nov 24 00:11:18.111412 containerd[1981]: time="2025-11-24T00:11:18.111341465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-28,Uid:407286a1fda5cdf07c91f47e04839253,Namespace:kube-system,Attempt:0,}" Nov 24 00:11:18.129614 containerd[1981]: time="2025-11-24T00:11:18.129562684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-28,Uid:02952c0be9c878d0052f76252e24e396,Namespace:kube-system,Attempt:0,}" Nov 24 00:11:18.245096 kubelet[2952]: E1124 00:11:18.245010 2952 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="800ms" Nov 24 00:11:18.304398 containerd[1981]: time="2025-11-24T00:11:18.303781390Z" level=info msg="connecting to 
shim 670c7c7367aa67b9be6806530b7524fa9cd05dbdd1012a0b4259109526770f63" address="unix:///run/containerd/s/395a610f3b2d10e0b83d63c5dcde8e9eb7f8a29ef9cad368de362e5d00c4ad66" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:18.309307 containerd[1981]: time="2025-11-24T00:11:18.309258985Z" level=info msg="connecting to shim d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9" address="unix:///run/containerd/s/f583446f36db2f3874765118543aee0c70448b9c8603bab4ac422b20c9355193" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:18.309677 containerd[1981]: time="2025-11-24T00:11:18.309651721Z" level=info msg="connecting to shim bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761" address="unix:///run/containerd/s/a14ac068c3d820e067dfc25f59df587967113c8fbf853f48a9da88de6f4c36aa" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:18.443763 kubelet[2952]: I1124 00:11:18.443224 2952 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Nov 24 00:11:18.443763 kubelet[2952]: E1124 00:11:18.443626 2952 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Nov 24 00:11:18.461082 systemd[1]: Started cri-containerd-670c7c7367aa67b9be6806530b7524fa9cd05dbdd1012a0b4259109526770f63.scope - libcontainer container 670c7c7367aa67b9be6806530b7524fa9cd05dbdd1012a0b4259109526770f63. Nov 24 00:11:18.462866 systemd[1]: Started cri-containerd-bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761.scope - libcontainer container bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761. Nov 24 00:11:18.465152 systemd[1]: Started cri-containerd-d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9.scope - libcontainer container d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9. 
Nov 24 00:11:18.595169 containerd[1981]: time="2025-11-24T00:11:18.594497278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-28,Uid:407286a1fda5cdf07c91f47e04839253,Namespace:kube-system,Attempt:0,} returns sandbox id \"bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761\"" Nov 24 00:11:18.602656 kubelet[2952]: W1124 00:11:18.602516 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:18.602656 kubelet[2952]: E1124 00:11:18.602597 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:18.607107 containerd[1981]: time="2025-11-24T00:11:18.606507465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-28,Uid:f1833ea8f8f8be828d3c9868a5be23c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"670c7c7367aa67b9be6806530b7524fa9cd05dbdd1012a0b4259109526770f63\"" Nov 24 00:11:18.612470 containerd[1981]: time="2025-11-24T00:11:18.611791823Z" level=info msg="CreateContainer within sandbox \"bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 00:11:18.613253 containerd[1981]: time="2025-11-24T00:11:18.613221350Z" level=info msg="CreateContainer within sandbox \"670c7c7367aa67b9be6806530b7524fa9cd05dbdd1012a0b4259109526770f63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 00:11:18.634399 kubelet[2952]: W1124 00:11:18.634266 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-28&limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:18.634670 kubelet[2952]: E1124 00:11:18.634647 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-28&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:18.639744 containerd[1981]: time="2025-11-24T00:11:18.639707590Z" level=info msg="Container 4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:18.645291 containerd[1981]: time="2025-11-24T00:11:18.644554750Z" level=info msg="Container 1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:18.646444 containerd[1981]: time="2025-11-24T00:11:18.646401835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-28,Uid:02952c0be9c878d0052f76252e24e396,Namespace:kube-system,Attempt:0,} returns sandbox id \"d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9\"" Nov 24 00:11:18.649779 containerd[1981]: time="2025-11-24T00:11:18.649750871Z" level=info msg="CreateContainer within sandbox 
\"d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 00:11:18.660867 containerd[1981]: time="2025-11-24T00:11:18.660748294Z" level=info msg="CreateContainer within sandbox \"bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287\"" Nov 24 00:11:18.662241 containerd[1981]: time="2025-11-24T00:11:18.662209545Z" level=info msg="StartContainer for \"4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287\"" Nov 24 00:11:18.663983 containerd[1981]: time="2025-11-24T00:11:18.663939802Z" level=info msg="connecting to shim 4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287" address="unix:///run/containerd/s/a14ac068c3d820e067dfc25f59df587967113c8fbf853f48a9da88de6f4c36aa" protocol=ttrpc version=3 Nov 24 00:11:18.672058 containerd[1981]: time="2025-11-24T00:11:18.671950182Z" level=info msg="CreateContainer within sandbox \"670c7c7367aa67b9be6806530b7524fa9cd05dbdd1012a0b4259109526770f63\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8\"" Nov 24 00:11:18.673599 containerd[1981]: time="2025-11-24T00:11:18.673502363Z" level=info msg="StartContainer for \"1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8\"" Nov 24 00:11:18.675921 containerd[1981]: time="2025-11-24T00:11:18.675819338Z" level=info msg="connecting to shim 1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8" address="unix:///run/containerd/s/395a610f3b2d10e0b83d63c5dcde8e9eb7f8a29ef9cad368de362e5d00c4ad66" protocol=ttrpc version=3 Nov 24 00:11:18.682054 containerd[1981]: time="2025-11-24T00:11:18.681713548Z" level=info msg="Container c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:18.698482 systemd[1]: Started cri-containerd-4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287.scope - libcontainer container 4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287. Nov 24 00:11:18.711269 containerd[1981]: time="2025-11-24T00:11:18.711211759Z" level=info msg="CreateContainer within sandbox \"d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349\"" Nov 24 00:11:18.712926 containerd[1981]: time="2025-11-24T00:11:18.712892440Z" level=info msg="StartContainer for \"c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349\"" Nov 24 00:11:18.716341 containerd[1981]: time="2025-11-24T00:11:18.716253964Z" level=info msg="connecting to shim c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349" address="unix:///run/containerd/s/f583446f36db2f3874765118543aee0c70448b9c8603bab4ac422b20c9355193" protocol=ttrpc version=3 Nov 24 00:11:18.732998 systemd[1]: Started cri-containerd-1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8.scope - libcontainer container 1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8. Nov 24 00:11:18.772073 systemd[1]: Started cri-containerd-c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349.scope - libcontainer container c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349. 
Nov 24 00:11:18.926476 containerd[1981]: time="2025-11-24T00:11:18.926435615Z" level=info msg="StartContainer for \"4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287\" returns successfully" Nov 24 00:11:18.932178 containerd[1981]: time="2025-11-24T00:11:18.932125714Z" level=info msg="StartContainer for \"1bc491f0b44475b437ad70e6686d73facc1e842bc78c94534ed73de832d57dd8\" returns successfully" Nov 24 00:11:18.975803 containerd[1981]: time="2025-11-24T00:11:18.975753004Z" level=info msg="StartContainer for \"c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349\" returns successfully" Nov 24 00:11:19.046585 kubelet[2952]: E1124 00:11:19.046533 2952 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": dial tcp 172.31.17.28:6443: connect: connection refused" interval="1.6s" Nov 24 00:11:19.136968 kubelet[2952]: W1124 00:11:19.136824 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:19.136968 kubelet[2952]: E1124 00:11:19.136936 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:19.163926 kubelet[2952]: W1124 00:11:19.163778 2952 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.28:6443: connect: connection refused Nov 24 00:11:19.163926 kubelet[2952]: E1124 00:11:19.163889 2952 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.28:6443: connect: connection refused" logger="UnhandledError" Nov 24 00:11:19.246838 kubelet[2952]: I1124 00:11:19.246593 2952 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Nov 24 00:11:19.248708 kubelet[2952]: E1124 00:11:19.248671 2952 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.28:6443/api/v1/nodes\": dial tcp 172.31.17.28:6443: connect: connection refused" node="ip-172-31-17-28" Nov 24 00:11:19.741488 kubelet[2952]: E1124 00:11:19.739957 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:19.742823 kubelet[2952]: E1124 00:11:19.742798 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:19.749740 kubelet[2952]: E1124 00:11:19.749550 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:20.754760 kubelet[2952]: E1124 00:11:20.754231 2952 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:20.754760 kubelet[2952]: E1124 00:11:20.754609 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:20.755518 kubelet[2952]: E1124 00:11:20.755500 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:20.852943 kubelet[2952]: I1124 00:11:20.852380 2952 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Nov 24 00:11:21.759101 kubelet[2952]: E1124 00:11:21.758172 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:21.761342 kubelet[2952]: E1124 00:11:21.761228 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:21.811499 kubelet[2952]: E1124 00:11:21.811460 2952 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:22.684736 update_engine[1956]: I20251124 00:11:22.684643 1956 update_attempter.cc:509] Updating boot flags... Nov 24 00:11:22.871047 kubelet[2952]: E1124 00:11:22.870956 2952 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-28\" not found" node="ip-172-31-17-28" Nov 24 00:11:22.938732 kubelet[2952]: E1124 00:11:22.938544 2952 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-28.187ac8e2213c9080 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-28,UID:ip-172-31-17-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-28,},FirstTimestamp:2025-11-24 00:11:17.614751872 +0000 UTC m=+0.491217336,LastTimestamp:2025-11-24 00:11:17.614751872 +0000 UTC m=+0.491217336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-28,}" Nov 24 00:11:23.009754 kubelet[2952]: I1124 00:11:23.009703 2952 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-28" Nov 24 00:11:23.016399 kubelet[2952]: E1124 00:11:23.016249 2952 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-17-28.187ac8e2237d899c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-28,UID:ip-172-31-17-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-17-28,},FirstTimestamp:2025-11-24 00:11:17.65256438 +0000 UTC m=+0.529029846,LastTimestamp:2025-11-24 00:11:17.65256438 +0000 UTC m=+0.529029846,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-28,}" Nov 24 00:11:23.037652 kubelet[2952]: I1124 00:11:23.036246 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:23.101969 kubelet[2952]: E1124 00:11:23.101930 2952 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:23.101969 kubelet[2952]: I1124 00:11:23.101969 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:23.129779 kubelet[2952]: E1124 00:11:23.129530 2952 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:23.129779 kubelet[2952]: I1124 00:11:23.129576 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:23.153000 kubelet[2952]: E1124 00:11:23.152950 2952 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:23.225297 kubelet[2952]: I1124 00:11:23.224707 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:23.234397 kubelet[2952]: E1124 00:11:23.234228 2952 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:23.619765 kubelet[2952]: I1124 00:11:23.619087 2952 apiserver.go:52] "Watching apiserver" Nov 24 00:11:23.640750 kubelet[2952]: I1124 00:11:23.640709 2952 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:11:24.824397 kubelet[2952]: I1124 00:11:24.824355 2952 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:25.579328 systemd[1]: Reload requested from client PID 3495 ('systemctl') (unit session-9.scope)... Nov 24 00:11:25.579345 systemd[1]: Reloading... Nov 24 00:11:25.711875 zram_generator::config[3539]: No configuration found. Nov 24 00:11:26.040200 systemd[1]: Reloading finished in 460 ms. Nov 24 00:11:26.081474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:11:26.090156 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 00:11:26.090553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 00:11:26.090639 systemd[1]: kubelet.service: Consumed 1.043s CPU time, 129.9M memory peak. Nov 24 00:11:26.093612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 00:11:26.380598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 00:11:26.396407 (kubelet)[3599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 00:11:26.485745 kubelet[3599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:11:26.485745 kubelet[3599]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 00:11:26.485745 kubelet[3599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 00:11:26.486277 kubelet[3599]: I1124 00:11:26.485894 3599 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 00:11:26.500573 kubelet[3599]: I1124 00:11:26.500519 3599 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 24 00:11:26.500573 kubelet[3599]: I1124 00:11:26.500553 3599 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 00:11:26.501832 kubelet[3599]: I1124 00:11:26.501142 3599 server.go:954] "Client rotation is on, will bootstrap in background" Nov 24 00:11:26.504282 kubelet[3599]: I1124 00:11:26.504116 3599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 24 00:11:26.509815 kubelet[3599]: I1124 00:11:26.509775 3599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 00:11:26.514493 kubelet[3599]: I1124 00:11:26.514445 3599 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 00:11:26.517834 kubelet[3599]: I1124 00:11:26.517779 3599 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 24 00:11:26.518203 kubelet[3599]: I1124 00:11:26.518171 3599 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 00:11:26.518456 kubelet[3599]: I1124 00:11:26.518287 3599 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 00:11:26.518583 kubelet[3599]: I1124 00:11:26.518573 3599 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 00:11:26.518626 kubelet[3599]: I1124 00:11:26.518621 3599 container_manager_linux.go:304] "Creating device plugin manager" Nov 24 00:11:26.518718 kubelet[3599]: I1124 00:11:26.518709 3599 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:11:26.518925 kubelet[3599]: I1124 00:11:26.518915 3599 kubelet.go:446] "Attempting to sync node with API server" Nov 24 00:11:26.519565 kubelet[3599]: I1124 00:11:26.519086 3599 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 00:11:26.519565 kubelet[3599]: I1124 00:11:26.519118 3599 kubelet.go:352] "Adding apiserver pod source" Nov 24 00:11:26.519565 kubelet[3599]: I1124 00:11:26.519129 3599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 00:11:26.526066 kubelet[3599]: I1124 00:11:26.525977 3599 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 00:11:26.530372 kubelet[3599]: I1124 00:11:26.530347 3599 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 24 00:11:26.531941 kubelet[3599]: I1124 00:11:26.531913 3599 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 24 00:11:26.532104 kubelet[3599]: I1124 00:11:26.532096 3599 server.go:1287] "Started kubelet" Nov 24 00:11:26.539716 kubelet[3599]: I1124 00:11:26.539687 3599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 00:11:26.546920 kubelet[3599]: I1124 00:11:26.546268 3599 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Nov 24 00:11:26.547767 kubelet[3599]: I1124 00:11:26.547747 3599 server.go:479] "Adding debug handlers to kubelet server" Nov 24 00:11:26.550212 kubelet[3599]: I1124 00:11:26.550192 3599 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 24 00:11:26.557387 kubelet[3599]: I1124 00:11:26.557359 3599 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 24 00:11:26.557685 kubelet[3599]: I1124 00:11:26.557670 3599 reconciler.go:26] "Reconciler: start to sync state" Nov 24 00:11:26.558472 kubelet[3599]: I1124 00:11:26.558397 3599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 00:11:26.558685 kubelet[3599]: I1124 00:11:26.558666 3599 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 00:11:26.559033 kubelet[3599]: I1124 00:11:26.559011 3599 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 00:11:26.562598 kubelet[3599]: I1124 00:11:26.562433 3599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 00:11:26.565889 kubelet[3599]: I1124 00:11:26.565829 3599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 00:11:26.566005 kubelet[3599]: I1124 00:11:26.565901 3599 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 24 00:11:26.566005 kubelet[3599]: I1124 00:11:26.565930 3599 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 00:11:26.566005 kubelet[3599]: I1124 00:11:26.565939 3599 kubelet.go:2382] "Starting kubelet main sync loop" Nov 24 00:11:26.566005 kubelet[3599]: E1124 00:11:26.565993 3599 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 00:11:26.570910 kubelet[3599]: I1124 00:11:26.569038 3599 factory.go:221] Registration of the systemd container factory successfully Nov 24 00:11:26.571209 kubelet[3599]: I1124 00:11:26.571168 3599 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 00:11:26.582868 kubelet[3599]: E1124 00:11:26.582619 3599 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 00:11:26.585657 kubelet[3599]: I1124 00:11:26.585633 3599 factory.go:221] Registration of the containerd container factory successfully Nov 24 00:11:26.666209 kubelet[3599]: E1124 00:11:26.666108 3599 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.668794 3599 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.668813 3599 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.668836 3599 state_mem.go:36] "Initialized new in-memory state store" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.669081 3599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.669095 3599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.669119 3599 policy_none.go:49] "None policy: Start" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.669132 3599 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.669145 3599 state_mem.go:35] "Initializing new in-memory state store" Nov 24 00:11:26.669751 kubelet[3599]: I1124 00:11:26.669281 3599 state_mem.go:75] "Updated machine memory state" Nov 24 00:11:26.680121 kubelet[3599]: I1124 00:11:26.679594 3599 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 00:11:26.682122 kubelet[3599]: I1124 00:11:26.681713 3599 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 00:11:26.682122 kubelet[3599]: I1124 00:11:26.681730 3599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 00:11:26.682542 kubelet[3599]: I1124 00:11:26.682527 3599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 00:11:26.695323 kubelet[3599]: E1124 00:11:26.695282 3599 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 24 00:11:26.799336 kubelet[3599]: I1124 00:11:26.799300 3599 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-28" Nov 24 00:11:26.809710 kubelet[3599]: I1124 00:11:26.809335 3599 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-28" Nov 24 00:11:26.809710 kubelet[3599]: I1124 00:11:26.809425 3599 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-28" Nov 24 00:11:26.867223 kubelet[3599]: I1124 00:11:26.867190 3599 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:26.868294 kubelet[3599]: I1124 00:11:26.867904 3599 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:26.868294 kubelet[3599]: I1124 00:11:26.868000 3599 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:26.878643 kubelet[3599]: E1124 00:11:26.878543 3599 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-28\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:26.961954 kubelet[3599]: I1124 00:11:26.960608 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:26.961954 kubelet[3599]: I1124 00:11:26.960666 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:26.961954 kubelet[3599]: I1124 00:11:26.960699 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:26.961954 kubelet[3599]: I1124 00:11:26.960733 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02952c0be9c878d0052f76252e24e396-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-28\" (UID: \"02952c0be9c878d0052f76252e24e396\") " pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:26.961954 kubelet[3599]: I1124 00:11:26.960761 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f1833ea8f8f8be828d3c9868a5be23c7-ca-certs\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"f1833ea8f8f8be828d3c9868a5be23c7\") " pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:26.962204 kubelet[3599]: I1124 00:11:26.960800 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f1833ea8f8f8be828d3c9868a5be23c7-k8s-certs\") pod 
\"kube-apiserver-ip-172-31-17-28\" (UID: \"f1833ea8f8f8be828d3c9868a5be23c7\") " pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:26.962204 kubelet[3599]: I1124 00:11:26.960865 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f1833ea8f8f8be828d3c9868a5be23c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-28\" (UID: \"f1833ea8f8f8be828d3c9868a5be23c7\") " pod="kube-system/kube-apiserver-ip-172-31-17-28" Nov 24 00:11:26.962204 kubelet[3599]: I1124 00:11:26.960891 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:26.962204 kubelet[3599]: I1124 00:11:26.960918 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407286a1fda5cdf07c91f47e04839253-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-28\" (UID: \"407286a1fda5cdf07c91f47e04839253\") " pod="kube-system/kube-controller-manager-ip-172-31-17-28" Nov 24 00:11:27.532958 kubelet[3599]: I1124 00:11:27.532651 3599 apiserver.go:52] "Watching apiserver" Nov 24 00:11:27.558501 kubelet[3599]: I1124 00:11:27.558464 3599 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 24 00:11:27.620789 kubelet[3599]: I1124 00:11:27.620757 3599 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:27.637906 kubelet[3599]: E1124 00:11:27.635018 3599 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-28\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-28" Nov 24 00:11:27.663868 kubelet[3599]: I1124 00:11:27.663722 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-28" podStartSLOduration=1.663703076 podStartE2EDuration="1.663703076s" podCreationTimestamp="2025-11-24 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:11:27.662161306 +0000 UTC m=+1.255962104" watchObservedRunningTime="2025-11-24 00:11:27.663703076 +0000 UTC m=+1.257503874" Nov 24 00:11:27.689874 kubelet[3599]: I1124 00:11:27.689695 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-28" podStartSLOduration=3.689677822 podStartE2EDuration="3.689677822s" podCreationTimestamp="2025-11-24 00:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:11:27.67722024 +0000 UTC m=+1.271021036" watchObservedRunningTime="2025-11-24 00:11:27.689677822 +0000 UTC m=+1.283478616" Nov 24 00:11:28.826398 kubelet[3599]: I1124 00:11:28.826277 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-28" podStartSLOduration=2.826256708 podStartE2EDuration="2.826256708s" podCreationTimestamp="2025-11-24 00:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:11:27.690660708 +0000 UTC m=+1.284461504" watchObservedRunningTime="2025-11-24 00:11:28.826256708 +0000 UTC m=+2.420057506" Nov 24 00:11:29.840256 kubelet[3599]: I1124 00:11:29.840210 3599 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 00:11:29.841381 containerd[1981]: time="2025-11-24T00:11:29.841348320Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 00:11:29.841778 kubelet[3599]: I1124 00:11:29.841760 3599 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 00:11:30.681446 systemd[1]: Created slice kubepods-besteffort-pod547983a3_8696_4493_b796_49b10c32c84a.slice - libcontainer container kubepods-besteffort-pod547983a3_8696_4493_b796_49b10c32c84a.slice. Nov 24 00:11:30.685914 kubelet[3599]: I1124 00:11:30.685699 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/547983a3-8696-4493-b796-49b10c32c84a-xtables-lock\") pod \"kube-proxy-x8fvj\" (UID: \"547983a3-8696-4493-b796-49b10c32c84a\") " pod="kube-system/kube-proxy-x8fvj" Nov 24 00:11:30.687002 kubelet[3599]: I1124 00:11:30.686940 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/547983a3-8696-4493-b796-49b10c32c84a-lib-modules\") pod \"kube-proxy-x8fvj\" (UID: \"547983a3-8696-4493-b796-49b10c32c84a\") " pod="kube-system/kube-proxy-x8fvj" Nov 24 00:11:30.687221 kubelet[3599]: I1124 00:11:30.687178 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/547983a3-8696-4493-b796-49b10c32c84a-kube-proxy\") pod \"kube-proxy-x8fvj\" (UID: \"547983a3-8696-4493-b796-49b10c32c84a\") " pod="kube-system/kube-proxy-x8fvj" Nov 24 00:11:30.687353 kubelet[3599]: I1124 00:11:30.687332 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs24n\" (UniqueName: \"kubernetes.io/projected/547983a3-8696-4493-b796-49b10c32c84a-kube-api-access-xs24n\") pod \"kube-proxy-x8fvj\" (UID: \"547983a3-8696-4493-b796-49b10c32c84a\") " pod="kube-system/kube-proxy-x8fvj" Nov 24 00:11:30.962442 systemd[1]: Created slice kubepods-besteffort-podf05c8e2d_f785_4712_a1e5_8f5d640174db.slice - libcontainer container kubepods-besteffort-podf05c8e2d_f785_4712_a1e5_8f5d640174db.slice. 
Nov 24 00:11:30.989932 kubelet[3599]: I1124 00:11:30.989855 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f05c8e2d-f785-4712-a1e5-8f5d640174db-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rjkjn\" (UID: \"f05c8e2d-f785-4712-a1e5-8f5d640174db\") " pod="tigera-operator/tigera-operator-7dcd859c48-rjkjn" Nov 24 00:11:30.989932 kubelet[3599]: I1124 00:11:30.989914 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g49hr\" (UniqueName: \"kubernetes.io/projected/f05c8e2d-f785-4712-a1e5-8f5d640174db-kube-api-access-g49hr\") pod \"tigera-operator-7dcd859c48-rjkjn\" (UID: \"f05c8e2d-f785-4712-a1e5-8f5d640174db\") " pod="tigera-operator/tigera-operator-7dcd859c48-rjkjn" Nov 24 00:11:30.991610 containerd[1981]: time="2025-11-24T00:11:30.991575003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8fvj,Uid:547983a3-8696-4493-b796-49b10c32c84a,Namespace:kube-system,Attempt:0,}" Nov 24 00:11:31.027738 containerd[1981]: time="2025-11-24T00:11:31.027632830Z" level=info msg="connecting to shim 8ef1e52317ee89d9ff12265e064f8f4f1eb9ee90ee2e46bf056299abddf5f282" address="unix:///run/containerd/s/06c90728a5a5571c90ffe146fe7dfd07555b558bdb5790c1f6bed198ad9ec327" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:31.069121 systemd[1]: Started cri-containerd-8ef1e52317ee89d9ff12265e064f8f4f1eb9ee90ee2e46bf056299abddf5f282.scope - libcontainer container 8ef1e52317ee89d9ff12265e064f8f4f1eb9ee90ee2e46bf056299abddf5f282. Nov 24 00:11:31.118865 containerd[1981]: time="2025-11-24T00:11:31.118751026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x8fvj,Uid:547983a3-8696-4493-b796-49b10c32c84a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ef1e52317ee89d9ff12265e064f8f4f1eb9ee90ee2e46bf056299abddf5f282\"" Nov 24 00:11:31.123654 containerd[1981]: time="2025-11-24T00:11:31.123613771Z" level=info msg="CreateContainer within sandbox \"8ef1e52317ee89d9ff12265e064f8f4f1eb9ee90ee2e46bf056299abddf5f282\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 00:11:31.145386 containerd[1981]: time="2025-11-24T00:11:31.145325930Z" level=info msg="Container 48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:31.159155 containerd[1981]: time="2025-11-24T00:11:31.159018909Z" level=info msg="CreateContainer within sandbox \"8ef1e52317ee89d9ff12265e064f8f4f1eb9ee90ee2e46bf056299abddf5f282\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24\"" Nov 24 00:11:31.159815 containerd[1981]: time="2025-11-24T00:11:31.159790075Z" level=info msg="StartContainer for \"48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24\"" Nov 24 00:11:31.161758 containerd[1981]: time="2025-11-24T00:11:31.161726069Z" level=info msg="connecting to shim 48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24" address="unix:///run/containerd/s/06c90728a5a5571c90ffe146fe7dfd07555b558bdb5790c1f6bed198ad9ec327" protocol=ttrpc version=3 Nov 24 00:11:31.181124 systemd[1]: Started cri-containerd-48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24.scope - libcontainer container 48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24. 
Nov 24 00:11:31.259787 containerd[1981]: time="2025-11-24T00:11:31.258809733Z" level=info msg="StartContainer for \"48c97b3a33f1f72d14490068cc7c05fc9985e4e7d86f2aafd73deb9fce4f0e24\" returns successfully" Nov 24 00:11:31.267718 containerd[1981]: time="2025-11-24T00:11:31.267655579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rjkjn,Uid:f05c8e2d-f785-4712-a1e5-8f5d640174db,Namespace:tigera-operator,Attempt:0,}" Nov 24 00:11:31.300983 containerd[1981]: time="2025-11-24T00:11:31.300944008Z" level=info msg="connecting to shim f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809" address="unix:///run/containerd/s/e40270d178c3350a13e3c3cb6415180a02bceecada0ab2973010ae677c5af572" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:31.342193 systemd[1]: Started cri-containerd-f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809.scope - libcontainer container f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809. Nov 24 00:11:31.401571 containerd[1981]: time="2025-11-24T00:11:31.401520093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rjkjn,Uid:f05c8e2d-f785-4712-a1e5-8f5d640174db,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\"" Nov 24 00:11:31.405428 containerd[1981]: time="2025-11-24T00:11:31.405388832Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 00:11:31.643608 kubelet[3599]: I1124 00:11:31.643540 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x8fvj" podStartSLOduration=1.6435016949999999 podStartE2EDuration="1.643501695s" podCreationTimestamp="2025-11-24 00:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:11:31.64215789 +0000 UTC m=+5.235958687" watchObservedRunningTime="2025-11-24 00:11:31.643501695 +0000 UTC m=+5.237302492" Nov 24 00:11:31.807304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614342516.mount: Deactivated successfully. Nov 24 00:11:32.911995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389218766.mount: Deactivated successfully. 
Nov 24 00:11:35.460751 containerd[1981]: time="2025-11-24T00:11:35.460688789Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:35.461751 containerd[1981]: time="2025-11-24T00:11:35.461707287Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 00:11:35.463009 containerd[1981]: time="2025-11-24T00:11:35.462951383Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:35.465160 containerd[1981]: time="2025-11-24T00:11:35.465106560Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:35.466123 containerd[1981]: time="2025-11-24T00:11:35.465673119Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.060242565s" Nov 24 00:11:35.466123 containerd[1981]: time="2025-11-24T00:11:35.465714495Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 00:11:35.468084 containerd[1981]: time="2025-11-24T00:11:35.468047713Z" level=info msg="CreateContainer within sandbox \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 00:11:35.480082 containerd[1981]: time="2025-11-24T00:11:35.478101313Z" level=info msg="Container 2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:35.491552 containerd[1981]: time="2025-11-24T00:11:35.491501627Z" level=info msg="CreateContainer within sandbox \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\"" Nov 24 00:11:35.494129 containerd[1981]: time="2025-11-24T00:11:35.494072484Z" level=info msg="StartContainer for \"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\"" Nov 24 00:11:35.495327 containerd[1981]: time="2025-11-24T00:11:35.495286212Z" level=info msg="connecting to shim 2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700" address="unix:///run/containerd/s/e40270d178c3350a13e3c3cb6415180a02bceecada0ab2973010ae677c5af572" protocol=ttrpc version=3 Nov 24 00:11:35.531638 systemd[1]: Started cri-containerd-2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700.scope - libcontainer container 2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700. 
Nov 24 00:11:35.574770 containerd[1981]: time="2025-11-24T00:11:35.574722851Z" level=info msg="StartContainer for \"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\" returns successfully" Nov 24 00:11:35.670320 kubelet[3599]: I1124 00:11:35.670257 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rjkjn" podStartSLOduration=1.6086191159999998 podStartE2EDuration="5.670241475s" podCreationTimestamp="2025-11-24 00:11:30 +0000 UTC" firstStartedPulling="2025-11-24 00:11:31.40474926 +0000 UTC m=+4.998550036" lastFinishedPulling="2025-11-24 00:11:35.46637162 +0000 UTC m=+9.060172395" observedRunningTime="2025-11-24 00:11:35.669606597 +0000 UTC m=+9.263407395" watchObservedRunningTime="2025-11-24 00:11:35.670241475 +0000 UTC m=+9.264042270" Nov 24 00:11:39.328078 systemd[1]: cri-containerd-2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700.scope: Deactivated successfully. Nov 24 00:11:39.421718 containerd[1981]: time="2025-11-24T00:11:39.421653549Z" level=info msg="received container exit event container_id:\"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\" id:\"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\" pid:3911 exit_status:1 exited_at:{seconds:1763943099 nanos:331816685}" Nov 24 00:11:39.466337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700-rootfs.mount: Deactivated successfully. Nov 24 00:11:39.661898 kubelet[3599]: I1124 00:11:39.661872 3599 scope.go:117] "RemoveContainer" containerID="2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700" Nov 24 00:11:39.668938 containerd[1981]: time="2025-11-24T00:11:39.668866594Z" level=info msg="CreateContainer within sandbox \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 24 00:11:39.690559 containerd[1981]: time="2025-11-24T00:11:39.688986511Z" level=info msg="Container ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:39.696180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941411919.mount: Deactivated successfully. Nov 24 00:11:39.705103 containerd[1981]: time="2025-11-24T00:11:39.705063588Z" level=info msg="CreateContainer within sandbox \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07\"" Nov 24 00:11:39.706332 containerd[1981]: time="2025-11-24T00:11:39.706301321Z" level=info msg="StartContainer for \"ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07\"" Nov 24 00:11:39.707707 containerd[1981]: time="2025-11-24T00:11:39.707601413Z" level=info msg="connecting to shim ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07" address="unix:///run/containerd/s/e40270d178c3350a13e3c3cb6415180a02bceecada0ab2973010ae677c5af572" protocol=ttrpc version=3 Nov 24 00:11:39.748617 systemd[1]: Started cri-containerd-ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07.scope - libcontainer container ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07. 
Nov 24 00:11:39.830455 containerd[1981]: time="2025-11-24T00:11:39.830417169Z" level=info msg="StartContainer for \"ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07\" returns successfully" Nov 24 00:11:43.147018 sudo[2372]: pam_unix(sudo:session): session closed for user root Nov 24 00:11:43.172428 sshd[2370]: Connection closed by 139.178.68.195 port 53116 Nov 24 00:11:43.174109 sshd-session[2356]: pam_unix(sshd:session): session closed for user core Nov 24 00:11:43.179970 systemd[1]: sshd@8-172.31.17.28:22-139.178.68.195:53116.service: Deactivated successfully. Nov 24 00:11:43.183401 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 00:11:43.183642 systemd[1]: session-9.scope: Consumed 5.499s CPU time, 151.1M memory peak. Nov 24 00:11:43.185207 systemd-logind[1955]: Session 9 logged out. Waiting for processes to exit. Nov 24 00:11:43.187108 systemd-logind[1955]: Removed session 9. Nov 24 00:11:51.066287 systemd[1]: Created slice kubepods-besteffort-podbc7f41de_b5d6_463f_84da_86a7ed36bd6f.slice - libcontainer container kubepods-besteffort-podbc7f41de_b5d6_463f_84da_86a7ed36bd6f.slice. Nov 24 00:11:51.137558 kubelet[3599]: I1124 00:11:51.137476 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r66sn\" (UniqueName: \"kubernetes.io/projected/bc7f41de-b5d6-463f-84da-86a7ed36bd6f-kube-api-access-r66sn\") pod \"calico-typha-6c998bdd5d-5gvbv\" (UID: \"bc7f41de-b5d6-463f-84da-86a7ed36bd6f\") " pod="calico-system/calico-typha-6c998bdd5d-5gvbv" Nov 24 00:11:51.138047 kubelet[3599]: I1124 00:11:51.137575 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bc7f41de-b5d6-463f-84da-86a7ed36bd6f-typha-certs\") pod \"calico-typha-6c998bdd5d-5gvbv\" (UID: \"bc7f41de-b5d6-463f-84da-86a7ed36bd6f\") " pod="calico-system/calico-typha-6c998bdd5d-5gvbv" Nov 24 00:11:51.138047 kubelet[3599]: I1124 00:11:51.137648 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7f41de-b5d6-463f-84da-86a7ed36bd6f-tigera-ca-bundle\") pod \"calico-typha-6c998bdd5d-5gvbv\" (UID: \"bc7f41de-b5d6-463f-84da-86a7ed36bd6f\") " pod="calico-system/calico-typha-6c998bdd5d-5gvbv" Nov 24 00:11:51.322436 systemd[1]: Created slice kubepods-besteffort-pod2f4bb7eb_173c_46cc_9ccb_24d84f90c9ab.slice - libcontainer container kubepods-besteffort-pod2f4bb7eb_173c_46cc_9ccb_24d84f90c9ab.slice. 
Nov 24 00:11:51.338721 kubelet[3599]: I1124 00:11:51.338262 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-policysync\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.338721 kubelet[3599]: I1124 00:11:51.338313 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dcj\" (UniqueName: \"kubernetes.io/projected/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-kube-api-access-g5dcj\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.338721 kubelet[3599]: I1124 00:11:51.338344 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-cni-bin-dir\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.338721 kubelet[3599]: I1124 00:11:51.338366 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-node-certs\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.338721 kubelet[3599]: I1124 00:11:51.338389 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-cni-log-dir\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339082 kubelet[3599]: I1124 00:11:51.338411 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-tigera-ca-bundle\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339082 kubelet[3599]: I1124 00:11:51.338436 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-var-lib-calico\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339082 kubelet[3599]: I1124 00:11:51.338457 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-var-run-calico\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339082 kubelet[3599]: I1124 00:11:51.338492 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-flexvol-driver-host\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339082 kubelet[3599]: I1124 00:11:51.338516 3599 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-xtables-lock\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339290 kubelet[3599]: I1124 00:11:51.338542 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-lib-modules\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.339290 kubelet[3599]: I1124 00:11:51.338567 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab-cni-net-dir\") pod \"calico-node-tjn4m\" (UID: \"2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab\") " pod="calico-system/calico-node-tjn4m" Nov 24 00:11:51.378576 containerd[1981]: time="2025-11-24T00:11:51.378522332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c998bdd5d-5gvbv,Uid:bc7f41de-b5d6-463f-84da-86a7ed36bd6f,Namespace:calico-system,Attempt:0,}" Nov 24 00:11:51.459236 kubelet[3599]: E1124 00:11:51.459203 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.460868 kubelet[3599]: W1124 00:11:51.460292 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.467870 kubelet[3599]: E1124 00:11:51.465984 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.482874 kubelet[3599]: E1124 00:11:51.481063 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.482874 kubelet[3599]: W1124 00:11:51.481098 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.482874 kubelet[3599]: E1124 00:11:51.481124 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.490679 kubelet[3599]: E1124 00:11:51.489645 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.490679 kubelet[3599]: W1124 00:11:51.489707 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.490679 kubelet[3599]: E1124 00:11:51.489944 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.492153 containerd[1981]: time="2025-11-24T00:11:51.490040779Z" level=info msg="connecting to shim c2c50bccd57a9c0f09db85ecf5f70c226a1c7cd1439267a0913ab310a256e00a" address="unix:///run/containerd/s/8d4a0a64681ebc241911519f2513e94c06c6299c161a194789c93ee3d57dd6c7" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:51.504277 kubelet[3599]: E1124 00:11:51.503491 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:11:51.510917 kubelet[3599]: E1124 00:11:51.510881 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.510917 kubelet[3599]: W1124 00:11:51.510918 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.511314 kubelet[3599]: E1124 00:11:51.511007 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.511744 kubelet[3599]: E1124 00:11:51.511722 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.511825 kubelet[3599]: W1124 00:11:51.511746 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.511825 kubelet[3599]: E1124 00:11:51.511767 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.512419 kubelet[3599]: E1124 00:11:51.512359 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.512419 kubelet[3599]: W1124 00:11:51.512374 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.513207 kubelet[3599]: E1124 00:11:51.512392 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.513207 kubelet[3599]: E1124 00:11:51.513081 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.513207 kubelet[3599]: W1124 00:11:51.513095 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.513207 kubelet[3599]: E1124 00:11:51.513113 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.513890 kubelet[3599]: E1124 00:11:51.513805 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.513890 kubelet[3599]: W1124 00:11:51.513821 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.514761 kubelet[3599]: E1124 00:11:51.513838 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.514761 kubelet[3599]: E1124 00:11:51.514607 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.514761 kubelet[3599]: W1124 00:11:51.514619 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.514761 kubelet[3599]: E1124 00:11:51.514633 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.514998 kubelet[3599]: E1124 00:11:51.514829 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.514998 kubelet[3599]: W1124 00:11:51.514839 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.514998 kubelet[3599]: E1124 00:11:51.514880 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.516036 kubelet[3599]: E1124 00:11:51.515920 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.516036 kubelet[3599]: W1124 00:11:51.515937 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.516036 kubelet[3599]: E1124 00:11:51.515951 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.516268 kubelet[3599]: E1124 00:11:51.516188 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.516268 kubelet[3599]: W1124 00:11:51.516198 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.516268 kubelet[3599]: E1124 00:11:51.516211 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.518469 kubelet[3599]: E1124 00:11:51.518438 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.518469 kubelet[3599]: W1124 00:11:51.518459 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.518946 kubelet[3599]: E1124 00:11:51.518477 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.518946 kubelet[3599]: E1124 00:11:51.518697 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.518946 kubelet[3599]: W1124 00:11:51.518708 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.518946 kubelet[3599]: E1124 00:11:51.518721 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519141 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.520070 kubelet[3599]: W1124 00:11:51.519152 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519165 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519415 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.520070 kubelet[3599]: W1124 00:11:51.519425 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519437 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519617 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.520070 kubelet[3599]: W1124 00:11:51.519627 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519638 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.520070 kubelet[3599]: E1124 00:11:51.519911 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.520474 kubelet[3599]: W1124 00:11:51.519923 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.520474 kubelet[3599]: E1124 00:11:51.519938 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.522117 kubelet[3599]: E1124 00:11:51.522093 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.522117 kubelet[3599]: W1124 00:11:51.522116 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.522254 kubelet[3599]: E1124 00:11:51.522134 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.522378 kubelet[3599]: E1124 00:11:51.522361 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.522432 kubelet[3599]: W1124 00:11:51.522378 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.522432 kubelet[3599]: E1124 00:11:51.522392 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.522596 kubelet[3599]: E1124 00:11:51.522582 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.522649 kubelet[3599]: W1124 00:11:51.522598 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.522649 kubelet[3599]: E1124 00:11:51.522610 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.523436 kubelet[3599]: E1124 00:11:51.522918 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.523436 kubelet[3599]: W1124 00:11:51.522940 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.523436 kubelet[3599]: E1124 00:11:51.522952 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.523436 kubelet[3599]: E1124 00:11:51.523217 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.523436 kubelet[3599]: W1124 00:11:51.523228 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.523436 kubelet[3599]: E1124 00:11:51.523240 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.545731 kubelet[3599]: E1124 00:11:51.544807 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.545731 kubelet[3599]: W1124 00:11:51.545143 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.545731 kubelet[3599]: E1124 00:11:51.545221 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.545731 kubelet[3599]: I1124 00:11:51.545292 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/32cb229b-909c-49d5-aa91-1c2bceaac746-socket-dir\") pod \"csi-node-driver-l5ntz\" (UID: \"32cb229b-909c-49d5-aa91-1c2bceaac746\") " pod="calico-system/csi-node-driver-l5ntz" Nov 24 00:11:51.545731 kubelet[3599]: E1124 00:11:51.545625 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.545731 kubelet[3599]: W1124 00:11:51.545642 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.545731 kubelet[3599]: E1124 00:11:51.545684 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.545731 kubelet[3599]: I1124 00:11:51.545714 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqqk\" (UniqueName: \"kubernetes.io/projected/32cb229b-909c-49d5-aa91-1c2bceaac746-kube-api-access-7fqqk\") pod \"csi-node-driver-l5ntz\" (UID: \"32cb229b-909c-49d5-aa91-1c2bceaac746\") " pod="calico-system/csi-node-driver-l5ntz" Nov 24 00:11:51.546217 kubelet[3599]: E1124 00:11:51.546037 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.546217 kubelet[3599]: W1124 00:11:51.546051 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.546626 kubelet[3599]: E1124 00:11:51.546303 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.546626 kubelet[3599]: I1124 00:11:51.546335 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32cb229b-909c-49d5-aa91-1c2bceaac746-kubelet-dir\") pod \"csi-node-driver-l5ntz\" (UID: \"32cb229b-909c-49d5-aa91-1c2bceaac746\") " pod="calico-system/csi-node-driver-l5ntz" Nov 24 00:11:51.546626 kubelet[3599]: E1124 00:11:51.546265 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.546626 kubelet[3599]: W1124 00:11:51.546414 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.546626 kubelet[3599]: E1124 00:11:51.546429 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.547651 kubelet[3599]: E1124 00:11:51.546701 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.547651 kubelet[3599]: W1124 00:11:51.546712 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.547651 kubelet[3599]: E1124 00:11:51.546752 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.547651 kubelet[3599]: E1124 00:11:51.547049 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.547651 kubelet[3599]: W1124 00:11:51.547060 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.547651 kubelet[3599]: E1124 00:11:51.547074 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.550288 kubelet[3599]: E1124 00:11:51.550086 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.550288 kubelet[3599]: W1124 00:11:51.550109 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.550288 kubelet[3599]: E1124 00:11:51.550143 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.550288 kubelet[3599]: I1124 00:11:51.550179 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/32cb229b-909c-49d5-aa91-1c2bceaac746-registration-dir\") pod \"csi-node-driver-l5ntz\" (UID: \"32cb229b-909c-49d5-aa91-1c2bceaac746\") " pod="calico-system/csi-node-driver-l5ntz" Nov 24 00:11:51.551122 kubelet[3599]: E1124 00:11:51.550779 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.551122 kubelet[3599]: W1124 00:11:51.550831 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.551122 kubelet[3599]: E1124 00:11:51.550875 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.553496 kubelet[3599]: E1124 00:11:51.553357 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.553496 kubelet[3599]: W1124 00:11:51.553377 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.553496 kubelet[3599]: E1124 00:11:51.553399 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.555817 kubelet[3599]: E1124 00:11:51.555800 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.556704 kubelet[3599]: W1124 00:11:51.556678 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.557010 kubelet[3599]: E1124 00:11:51.556827 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.557243 systemd[1]: Started cri-containerd-c2c50bccd57a9c0f09db85ecf5f70c226a1c7cd1439267a0913ab310a256e00a.scope - libcontainer container c2c50bccd57a9c0f09db85ecf5f70c226a1c7cd1439267a0913ab310a256e00a. 
Nov 24 00:11:51.557528 kubelet[3599]: I1124 00:11:51.557293 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/32cb229b-909c-49d5-aa91-1c2bceaac746-varrun\") pod \"csi-node-driver-l5ntz\" (UID: \"32cb229b-909c-49d5-aa91-1c2bceaac746\") " pod="calico-system/csi-node-driver-l5ntz" Nov 24 00:11:51.558159 kubelet[3599]: E1124 00:11:51.558141 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.558276 kubelet[3599]: W1124 00:11:51.558241 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.558589 kubelet[3599]: E1124 00:11:51.558575 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.558668 kubelet[3599]: W1124 00:11:51.558656 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.558751 kubelet[3599]: E1124 00:11:51.558737 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.559096 kubelet[3599]: E1124 00:11:51.559029 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.559208 kubelet[3599]: E1124 00:11:51.559197 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.559304 kubelet[3599]: W1124 00:11:51.559283 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.559425 kubelet[3599]: E1124 00:11:51.559379 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.559766 kubelet[3599]: E1124 00:11:51.559751 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.561044 kubelet[3599]: W1124 00:11:51.560873 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.561044 kubelet[3599]: E1124 00:11:51.560897 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.562123 kubelet[3599]: E1124 00:11:51.562106 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.562240 kubelet[3599]: W1124 00:11:51.562225 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.562346 kubelet[3599]: E1124 00:11:51.562313 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.634087 containerd[1981]: time="2025-11-24T00:11:51.634027409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tjn4m,Uid:2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab,Namespace:calico-system,Attempt:0,}" Nov 24 00:11:51.656342 containerd[1981]: time="2025-11-24T00:11:51.656083928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c998bdd5d-5gvbv,Uid:bc7f41de-b5d6-463f-84da-86a7ed36bd6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2c50bccd57a9c0f09db85ecf5f70c226a1c7cd1439267a0913ab310a256e00a\"" Nov 24 00:11:51.659146 containerd[1981]: time="2025-11-24T00:11:51.659105116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 00:11:51.660060 kubelet[3599]: E1124 00:11:51.660035 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.660060 kubelet[3599]: W1124 00:11:51.660058 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.662294 kubelet[3599]: E1124 00:11:51.660097 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.662294 kubelet[3599]: E1124 00:11:51.661076 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.662294 kubelet[3599]: W1124 00:11:51.661093 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.662294 kubelet[3599]: E1124 00:11:51.661115 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.662940 kubelet[3599]: E1124 00:11:51.662914 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.662940 kubelet[3599]: W1124 00:11:51.662939 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.663088 kubelet[3599]: E1124 00:11:51.662975 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.663773 kubelet[3599]: E1124 00:11:51.663684 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.663773 kubelet[3599]: W1124 00:11:51.663706 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.664719 kubelet[3599]: E1124 00:11:51.663808 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.665144 kubelet[3599]: E1124 00:11:51.665092 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.665144 kubelet[3599]: W1124 00:11:51.665113 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.665346 kubelet[3599]: E1124 00:11:51.665275 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.665951 kubelet[3599]: E1124 00:11:51.665934 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.665951 kubelet[3599]: W1124 00:11:51.665951 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.669209 kubelet[3599]: E1124 00:11:51.669096 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.670995 kubelet[3599]: E1124 00:11:51.670965 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.670995 kubelet[3599]: W1124 00:11:51.670990 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.671134 kubelet[3599]: E1124 00:11:51.671083 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.671493 kubelet[3599]: E1124 00:11:51.671368 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.671493 kubelet[3599]: W1124 00:11:51.671384 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.671892 kubelet[3599]: E1124 00:11:51.671561 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.671892 kubelet[3599]: E1124 00:11:51.671791 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.671892 kubelet[3599]: W1124 00:11:51.671803 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.671892 kubelet[3599]: E1124 00:11:51.671882 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.672764 kubelet[3599]: E1124 00:11:51.672748 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.672764 kubelet[3599]: W1124 00:11:51.672765 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.673146 kubelet[3599]: E1124 00:11:51.672827 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.674909 kubelet[3599]: E1124 00:11:51.674889 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.674909 kubelet[3599]: W1124 00:11:51.674909 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.675282 kubelet[3599]: E1124 00:11:51.675038 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.675798 kubelet[3599]: E1124 00:11:51.675764 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.676177 kubelet[3599]: W1124 00:11:51.675985 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.676865 kubelet[3599]: E1124 00:11:51.676711 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.677063 kubelet[3599]: W1124 00:11:51.677045 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.678322 kubelet[3599]: E1124 00:11:51.678212 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.678322 kubelet[3599]: W1124 00:11:51.678227 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.681160 kubelet[3599]: E1124 00:11:51.681006 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.681160 kubelet[3599]: W1124 00:11:51.681024 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.682195 kubelet[3599]: E1124 00:11:51.681906 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.682195 kubelet[3599]: W1124 00:11:51.681919 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.682195 kubelet[3599]: E1124 00:11:51.681940 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.682569 kubelet[3599]: E1124 00:11:51.682553 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.682860 kubelet[3599]: W1124 00:11:51.682652 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.682860 kubelet[3599]: E1124 00:11:51.682673 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.684018 kubelet[3599]: E1124 00:11:51.684002 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.684507 kubelet[3599]: W1124 00:11:51.684475 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.684763 kubelet[3599]: E1124 00:11:51.684590 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.684963 kubelet[3599]: E1124 00:11:51.684334 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.685247 kubelet[3599]: E1124 00:11:51.684365 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.685247 kubelet[3599]: E1124 00:11:51.684344 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.686376 kubelet[3599]: E1124 00:11:51.686193 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.686376 kubelet[3599]: W1124 00:11:51.686209 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.686376 kubelet[3599]: E1124 00:11:51.686229 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.686558 kubelet[3599]: E1124 00:11:51.684356 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.687197 kubelet[3599]: E1124 00:11:51.687054 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.687197 kubelet[3599]: W1124 00:11:51.687069 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.687197 kubelet[3599]: E1124 00:11:51.687087 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.689100 kubelet[3599]: E1124 00:11:51.689062 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.689100 kubelet[3599]: W1124 00:11:51.689076 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.689303 containerd[1981]: time="2025-11-24T00:11:51.689134866Z" level=info msg="connecting to shim b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648" address="unix:///run/containerd/s/8f6b9bfc541412ca40bfbc7a1d27c3c650edbae45cc61fb904381da96bded48d" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:11:51.689519 kubelet[3599]: E1124 00:11:51.689418 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.690866 kubelet[3599]: E1124 00:11:51.689978 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.691042 kubelet[3599]: W1124 00:11:51.690954 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.691042 kubelet[3599]: E1124 00:11:51.690993 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.691496 kubelet[3599]: E1124 00:11:51.691461 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.691496 kubelet[3599]: W1124 00:11:51.691476 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.691696 kubelet[3599]: E1124 00:11:51.691681 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.692913 kubelet[3599]: E1124 00:11:51.692878 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.692913 kubelet[3599]: W1124 00:11:51.692894 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.693091 kubelet[3599]: E1124 00:11:51.693044 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:51.693694 kubelet[3599]: E1124 00:11:51.693415 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.693694 kubelet[3599]: W1124 00:11:51.693429 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.693694 kubelet[3599]: E1124 00:11:51.693444 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.723869 kubelet[3599]: E1124 00:11:51.721964 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:51.723869 kubelet[3599]: W1124 00:11:51.721992 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:51.723869 kubelet[3599]: E1124 00:11:51.722019 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:51.756025 systemd[1]: Started cri-containerd-b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648.scope - libcontainer container b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648. Nov 24 00:11:51.821337 containerd[1981]: time="2025-11-24T00:11:51.821304784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tjn4m,Uid:2f4bb7eb-173c-46cc-9ccb-24d84f90c9ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\"" Nov 24 00:11:53.152006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169042494.mount: Deactivated successfully. 
Nov 24 00:11:53.598740 kubelet[3599]: E1124 00:11:53.598621 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:11:54.576431 containerd[1981]: time="2025-11-24T00:11:54.576241036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:54.578271 containerd[1981]: time="2025-11-24T00:11:54.578051353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 00:11:54.578889 containerd[1981]: time="2025-11-24T00:11:54.578827956Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:54.587198 containerd[1981]: time="2025-11-24T00:11:54.587099615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:54.588773 containerd[1981]: time="2025-11-24T00:11:54.588550501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.929396708s" Nov 24 00:11:54.588773 containerd[1981]: time="2025-11-24T00:11:54.588595864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 00:11:54.590356 containerd[1981]: time="2025-11-24T00:11:54.590327048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 00:11:54.656495 containerd[1981]: time="2025-11-24T00:11:54.656173840Z" level=info msg="CreateContainer within sandbox \"c2c50bccd57a9c0f09db85ecf5f70c226a1c7cd1439267a0913ab310a256e00a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 00:11:54.673228 containerd[1981]: time="2025-11-24T00:11:54.670799984Z" level=info msg="Container 3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:54.686940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020839795.mount: Deactivated successfully. 
Nov 24 00:11:54.717186 containerd[1981]: time="2025-11-24T00:11:54.717138581Z" level=info msg="CreateContainer within sandbox \"c2c50bccd57a9c0f09db85ecf5f70c226a1c7cd1439267a0913ab310a256e00a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa\"" Nov 24 00:11:54.717978 containerd[1981]: time="2025-11-24T00:11:54.717767441Z" level=info msg="StartContainer for \"3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa\"" Nov 24 00:11:54.719814 containerd[1981]: time="2025-11-24T00:11:54.719757717Z" level=info msg="connecting to shim 3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa" address="unix:///run/containerd/s/8d4a0a64681ebc241911519f2513e94c06c6299c161a194789c93ee3d57dd6c7" protocol=ttrpc version=3 Nov 24 00:11:54.789075 systemd[1]: Started cri-containerd-3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa.scope - libcontainer container 3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa. Nov 24 00:11:54.866411 containerd[1981]: time="2025-11-24T00:11:54.866281806Z" level=info msg="StartContainer for \"3e179f45095d869b5ba95fbf10561aab880aed9339a070ba7e15b60f79c22afa\" returns successfully" Nov 24 00:11:55.567428 kubelet[3599]: E1124 00:11:55.567350 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:11:55.870992 kubelet[3599]: E1124 00:11:55.869832 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.870992 kubelet[3599]: W1124 00:11:55.870925 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.870992 kubelet[3599]: E1124 00:11:55.870956 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.874873 kubelet[3599]: E1124 00:11:55.872008 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.874873 kubelet[3599]: W1124 00:11:55.872030 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.874873 kubelet[3599]: E1124 00:11:55.872068 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.875256 kubelet[3599]: E1124 00:11:55.875232 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.875497 kubelet[3599]: W1124 00:11:55.875466 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.875553 kubelet[3599]: E1124 00:11:55.875504 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.876035 kubelet[3599]: E1124 00:11:55.876016 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.876641 kubelet[3599]: W1124 00:11:55.876460 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.876641 kubelet[3599]: E1124 00:11:55.876492 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.876930 kubelet[3599]: E1124 00:11:55.876914 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.876998 kubelet[3599]: W1124 00:11:55.876931 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.876998 kubelet[3599]: E1124 00:11:55.876951 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.877349 kubelet[3599]: E1124 00:11:55.877328 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.877423 kubelet[3599]: W1124 00:11:55.877350 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.877423 kubelet[3599]: E1124 00:11:55.877367 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.877745 kubelet[3599]: E1124 00:11:55.877727 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.877745 kubelet[3599]: W1124 00:11:55.877745 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.877886 kubelet[3599]: E1124 00:11:55.877762 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.878227 kubelet[3599]: E1124 00:11:55.878140 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.878227 kubelet[3599]: W1124 00:11:55.878155 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.878227 kubelet[3599]: E1124 00:11:55.878169 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.878698 kubelet[3599]: E1124 00:11:55.878677 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.878698 kubelet[3599]: W1124 00:11:55.878694 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.878814 kubelet[3599]: E1124 00:11:55.878709 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.879029 kubelet[3599]: E1124 00:11:55.879010 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.879029 kubelet[3599]: W1124 00:11:55.879026 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.879136 kubelet[3599]: E1124 00:11:55.879041 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.879770 kubelet[3599]: E1124 00:11:55.879586 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.879770 kubelet[3599]: W1124 00:11:55.879605 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.879770 kubelet[3599]: E1124 00:11:55.879619 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.879967 kubelet[3599]: E1124 00:11:55.879925 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.879967 kubelet[3599]: W1124 00:11:55.879936 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.879967 kubelet[3599]: E1124 00:11:55.879950 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.880225 kubelet[3599]: E1124 00:11:55.880173 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.880225 kubelet[3599]: W1124 00:11:55.880188 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.880225 kubelet[3599]: E1124 00:11:55.880200 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.880856 kubelet[3599]: E1124 00:11:55.880389 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.880856 kubelet[3599]: W1124 00:11:55.880400 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.880856 kubelet[3599]: E1124 00:11:55.880412 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.880856 kubelet[3599]: E1124 00:11:55.880579 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.880856 kubelet[3599]: W1124 00:11:55.880588 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.880856 kubelet[3599]: E1124 00:11:55.880598 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.916560 kubelet[3599]: E1124 00:11:55.916527 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.916560 kubelet[3599]: W1124 00:11:55.916556 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.916837 kubelet[3599]: E1124 00:11:55.916580 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.917576 kubelet[3599]: E1124 00:11:55.917549 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.917679 kubelet[3599]: W1124 00:11:55.917607 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.917959 kubelet[3599]: E1124 00:11:55.917927 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.918283 kubelet[3599]: E1124 00:11:55.918224 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.918283 kubelet[3599]: W1124 00:11:55.918241 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.918651 kubelet[3599]: E1124 00:11:55.918618 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.919117 kubelet[3599]: E1124 00:11:55.918886 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.919117 kubelet[3599]: W1124 00:11:55.918900 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.919117 kubelet[3599]: E1124 00:11:55.918916 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.919305 kubelet[3599]: E1124 00:11:55.919282 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.919305 kubelet[3599]: W1124 00:11:55.919300 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.919462 kubelet[3599]: E1124 00:11:55.919444 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.920522 kubelet[3599]: E1124 00:11:55.920463 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.920599 kubelet[3599]: W1124 00:11:55.920523 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.920599 kubelet[3599]: E1124 00:11:55.920589 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.921211 kubelet[3599]: E1124 00:11:55.921060 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.921211 kubelet[3599]: W1124 00:11:55.921075 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.921211 kubelet[3599]: E1124 00:11:55.921158 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.921550 kubelet[3599]: E1124 00:11:55.921533 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.921618 kubelet[3599]: W1124 00:11:55.921551 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.921663 kubelet[3599]: E1124 00:11:55.921637 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.922135 kubelet[3599]: E1124 00:11:55.922089 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.922135 kubelet[3599]: W1124 00:11:55.922124 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.922255 kubelet[3599]: E1124 00:11:55.922208 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.922593 kubelet[3599]: E1124 00:11:55.922576 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.922593 kubelet[3599]: W1124 00:11:55.922591 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.922807 kubelet[3599]: E1124 00:11:55.922688 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.923761 kubelet[3599]: E1124 00:11:55.923742 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.923831 kubelet[3599]: W1124 00:11:55.923763 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.923903 kubelet[3599]: E1124 00:11:55.923870 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.924372 kubelet[3599]: E1124 00:11:55.924059 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.924372 kubelet[3599]: W1124 00:11:55.924071 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.924372 kubelet[3599]: E1124 00:11:55.924151 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.924662 kubelet[3599]: E1124 00:11:55.924383 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.924662 kubelet[3599]: W1124 00:11:55.924394 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.924662 kubelet[3599]: E1124 00:11:55.924422 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.925370 kubelet[3599]: E1124 00:11:55.924693 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.925370 kubelet[3599]: W1124 00:11:55.924704 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.925370 kubelet[3599]: E1124 00:11:55.924970 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.925679 kubelet[3599]: E1124 00:11:55.925660 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.925679 kubelet[3599]: W1124 00:11:55.925677 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.926017 kubelet[3599]: E1124 00:11:55.925727 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.926671 kubelet[3599]: E1124 00:11:55.926633 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.926671 kubelet[3599]: W1124 00:11:55.926652 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.926671 kubelet[3599]: E1124 00:11:55.926668 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.928512 kubelet[3599]: E1124 00:11:55.928441 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.928512 kubelet[3599]: W1124 00:11:55.928460 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.928943 kubelet[3599]: E1124 00:11:55.928626 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 00:11:55.928943 kubelet[3599]: E1124 00:11:55.928939 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 00:11:55.929024 kubelet[3599]: W1124 00:11:55.928953 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 00:11:55.929024 kubelet[3599]: E1124 00:11:55.928968 3599 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 00:11:55.959251 containerd[1981]: time="2025-11-24T00:11:55.958336292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:55.963447 containerd[1981]: time="2025-11-24T00:11:55.962796068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 00:11:55.964644 containerd[1981]: time="2025-11-24T00:11:55.964602907Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:55.970644 containerd[1981]: time="2025-11-24T00:11:55.969718164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:11:55.970644 containerd[1981]: time="2025-11-24T00:11:55.970427691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.379837463s" Nov 24 00:11:55.970644 containerd[1981]: time="2025-11-24T00:11:55.970498845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 00:11:55.973372 containerd[1981]: time="2025-11-24T00:11:55.973333512Z" level=info msg="CreateContainer within sandbox \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 00:11:55.988911 containerd[1981]: time="2025-11-24T00:11:55.986801459Z" level=info msg="Container 62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:11:56.024091 containerd[1981]: time="2025-11-24T00:11:56.024026119Z" level=info msg="CreateContainer within sandbox \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf\"" Nov 24 00:11:56.027519 containerd[1981]: time="2025-11-24T00:11:56.027448025Z" level=info msg="StartContainer for \"62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf\"" Nov 24 00:11:56.076712 containerd[1981]: time="2025-11-24T00:11:56.076659038Z" level=info msg="connecting to 
shim 62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf" address="unix:///run/containerd/s/8f6b9bfc541412ca40bfbc7a1d27c3c650edbae45cc61fb904381da96bded48d" protocol=ttrpc version=3 Nov 24 00:11:56.110591 systemd[1]: Started cri-containerd-62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf.scope - libcontainer container 62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf. Nov 24 00:11:56.226321 containerd[1981]: time="2025-11-24T00:11:56.226249963Z" level=info msg="StartContainer for \"62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf\" returns successfully" Nov 24 00:11:56.239602 systemd[1]: cri-containerd-62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf.scope: Deactivated successfully. Nov 24 00:11:56.245297 containerd[1981]: time="2025-11-24T00:11:56.245245420Z" level=info msg="received container exit event container_id:\"62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf\" id:\"62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf\" pid:4312 exited_at:{seconds:1763943116 nanos:244803740}" Nov 24 00:11:56.279159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62f7e0f9d8bc3926907b7faac53312b759316657ed5e11f4daaf5f24817013cf-rootfs.mount: Deactivated successfully. Nov 24 00:11:56.830334 kubelet[3599]: I1124 00:11:56.830300 3599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:11:56.834536 containerd[1981]: time="2025-11-24T00:11:56.832417354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 00:11:56.887884 kubelet[3599]: I1124 00:11:56.887743 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c998bdd5d-5gvbv" podStartSLOduration=2.955920962 podStartE2EDuration="5.887708822s" podCreationTimestamp="2025-11-24 00:11:51 +0000 UTC" firstStartedPulling="2025-11-24 00:11:51.658392991 +0000 UTC m=+25.252193778" lastFinishedPulling="2025-11-24 00:11:54.590180847 +0000 UTC m=+28.183981638" observedRunningTime="2025-11-24 00:11:55.83768993 +0000 UTC m=+29.431490727" watchObservedRunningTime="2025-11-24 00:11:56.887708822 +0000 UTC m=+30.481509624" Nov 24 00:11:57.569313 kubelet[3599]: E1124 00:11:57.569257 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:11:59.567374 kubelet[3599]: E1124 00:11:59.567216 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:01.574946 kubelet[3599]: E1124 00:12:01.572301 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:03.567640 kubelet[3599]: E1124 00:12:03.567267 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:04.719250 containerd[1981]: time="2025-11-24T00:12:04.719143047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:12:04.725226 containerd[1981]: time="2025-11-24T00:12:04.725009348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 00:12:04.725226 containerd[1981]: time="2025-11-24T00:12:04.725164721Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:12:04.732972 containerd[1981]: time="2025-11-24T00:12:04.732612299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:12:04.734533 containerd[1981]: time="2025-11-24T00:12:04.734460209Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.901993586s" Nov 24 00:12:04.734533 containerd[1981]: time="2025-11-24T00:12:04.734509833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 00:12:04.983829 containerd[1981]: time="2025-11-24T00:12:04.983359938Z" level=info msg="CreateContainer within sandbox \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 00:12:05.027444 containerd[1981]: time="2025-11-24T00:12:05.017120967Z" level=info msg="Container 1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:12:05.052229 containerd[1981]: time="2025-11-24T00:12:05.052162842Z" level=info msg="CreateContainer within sandbox \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac\"" Nov 24 00:12:05.065630 containerd[1981]: time="2025-11-24T00:12:05.065418723Z" level=info msg="StartContainer for \"1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac\"" Nov 24 00:12:05.075137 containerd[1981]: time="2025-11-24T00:12:05.075067053Z" level=info msg="connecting to shim 1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac" address="unix:///run/containerd/s/8f6b9bfc541412ca40bfbc7a1d27c3c650edbae45cc61fb904381da96bded48d" protocol=ttrpc version=3 Nov 24 00:12:05.172158 systemd[1]: Started cri-containerd-1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac.scope - libcontainer container 1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac. 
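The FlexVolume probe failures earlier in this stretch are a timing artifact rather than a Calico bug: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before the flexvol-driver init container (pulled just above) has installed the binary, so the driver call produces no output ("executable file not found in $PATH", output: ""), and unmarshalling that empty output as JSON yields exactly "unexpected end of JSON input". A minimal Go sketch, standard library only and illustrative rather than the kubelet's actual driver-call code, reproduces the message:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DriverStatus stands in for the status object a FlexVolume driver is
    // expected to print on stdout; the exact fields do not matter here.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message"`
    }

    func main() {
        // When the driver executable is missing, the captured output is empty.
        output := []byte("")

        var st DriverStatus
        if err := json.Unmarshal(output, &st); err != nil {
            fmt.Println("unmarshal failed:", err) // unexpected end of JSON input
        }
    }

Once the flexvol-driver container had run (it started and exited at 00:11:56 above), the probe errors do not recur in this log.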
Nov 24 00:12:05.507357 containerd[1981]: time="2025-11-24T00:12:05.507032314Z" level=info msg="StartContainer for \"1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac\" returns successfully" Nov 24 00:12:05.568539 kubelet[3599]: E1124 00:12:05.568042 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:07.057831 systemd[1]: cri-containerd-1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac.scope: Deactivated successfully. Nov 24 00:12:07.058198 systemd[1]: cri-containerd-1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac.scope: Consumed 754ms CPU time, 171.1M memory peak, 11.6M read from disk, 171.3M written to disk. Nov 24 00:12:07.117997 containerd[1981]: time="2025-11-24T00:12:07.117765799Z" level=info msg="received container exit event container_id:\"1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac\" id:\"1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac\" pid:4374 exited_at:{seconds:1763943127 nanos:117526089}" Nov 24 00:12:07.169762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f3829058e19ee20fab9438ebd99fca9518ce28afb11d9b817adcaea720ffaac-rootfs.mount: Deactivated successfully. Nov 24 00:12:07.197495 kubelet[3599]: I1124 00:12:07.197449 3599 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 24 00:12:07.367830 systemd[1]: Created slice kubepods-burstable-pod4f493023_ee80_40fd_b330_73391b1466e0.slice - libcontainer container kubepods-burstable-pod4f493023_ee80_40fd_b330_73391b1466e0.slice. 
Nov 24 00:12:07.386449 kubelet[3599]: I1124 00:12:07.386414 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzbn\" (UniqueName: \"kubernetes.io/projected/34188177-1cb2-4f9a-a0df-59150fa93682-kube-api-access-gtzbn\") pod \"whisker-58dfc98864-s8hpf\" (UID: \"34188177-1cb2-4f9a-a0df-59150fa93682\") " pod="calico-system/whisker-58dfc98864-s8hpf" Nov 24 00:12:07.387868 kubelet[3599]: I1124 00:12:07.386692 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-backend-key-pair\") pod \"whisker-58dfc98864-s8hpf\" (UID: \"34188177-1cb2-4f9a-a0df-59150fa93682\") " pod="calico-system/whisker-58dfc98864-s8hpf" Nov 24 00:12:07.387868 kubelet[3599]: I1124 00:12:07.386739 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-ca-bundle\") pod \"whisker-58dfc98864-s8hpf\" (UID: \"34188177-1cb2-4f9a-a0df-59150fa93682\") " pod="calico-system/whisker-58dfc98864-s8hpf" Nov 24 00:12:07.387868 kubelet[3599]: I1124 00:12:07.386775 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b66r\" (UniqueName: \"kubernetes.io/projected/4f493023-ee80-40fd-b330-73391b1466e0-kube-api-access-7b66r\") pod \"coredns-668d6bf9bc-zl56c\" (UID: \"4f493023-ee80-40fd-b330-73391b1466e0\") " pod="kube-system/coredns-668d6bf9bc-zl56c" Nov 24 00:12:07.387868 kubelet[3599]: I1124 00:12:07.386813 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck8tq\" (UniqueName: \"kubernetes.io/projected/9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655-kube-api-access-ck8tq\") pod \"calico-apiserver-68bdc98bdb-v9btm\" (UID: \"9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655\") " pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" Nov 24 00:12:07.387868 kubelet[3599]: I1124 00:12:07.386867 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f493023-ee80-40fd-b330-73391b1466e0-config-volume\") pod \"coredns-668d6bf9bc-zl56c\" (UID: \"4f493023-ee80-40fd-b330-73391b1466e0\") " pod="kube-system/coredns-668d6bf9bc-zl56c" Nov 24 00:12:07.392079 kubelet[3599]: I1124 00:12:07.386894 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655-calico-apiserver-certs\") pod \"calico-apiserver-68bdc98bdb-v9btm\" (UID: \"9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655\") " pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" Nov 24 00:12:07.422470 systemd[1]: Created slice kubepods-besteffort-pod9bb7a377_4ecd_4dcf_a90a_e0e0f9c65655.slice - libcontainer container kubepods-besteffort-pod9bb7a377_4ecd_4dcf_a90a_e0e0f9c65655.slice. Nov 24 00:12:07.438813 systemd[1]: Created slice kubepods-besteffort-pod34188177_1cb2_4f9a_a0df_59150fa93682.slice - libcontainer container kubepods-besteffort-pod34188177_1cb2_4f9a_a0df_59150fa93682.slice. Nov 24 00:12:07.449253 systemd[1]: Created slice kubepods-besteffort-podabebab1e_f092_4a6b_94e1_1c92a233e08a.slice - libcontainer container kubepods-besteffort-podabebab1e_f092_4a6b_94e1_1c92a233e08a.slice. 
Nov 24 00:12:07.466562 systemd[1]: Created slice kubepods-burstable-pod133e718f_e16a_471d_9832_196325dfbc53.slice - libcontainer container kubepods-burstable-pod133e718f_e16a_471d_9832_196325dfbc53.slice. Nov 24 00:12:07.475352 systemd[1]: Created slice kubepods-besteffort-pod9441d7ab_9ca0_4aa4_8c69_0bae216edd81.slice - libcontainer container kubepods-besteffort-pod9441d7ab_9ca0_4aa4_8c69_0bae216edd81.slice. Nov 24 00:12:07.485318 systemd[1]: Created slice kubepods-besteffort-pod02dedcc0_cbf6_46e5_bf8e_d29b3313eb81.slice - libcontainer container kubepods-besteffort-pod02dedcc0_cbf6_46e5_bf8e_d29b3313eb81.slice. Nov 24 00:12:07.489491 kubelet[3599]: I1124 00:12:07.489452 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9441d7ab-9ca0-4aa4-8c69-0bae216edd81-goldmane-key-pair\") pod \"goldmane-666569f655-9xvvr\" (UID: \"9441d7ab-9ca0-4aa4-8c69-0bae216edd81\") " pod="calico-system/goldmane-666569f655-9xvvr" Nov 24 00:12:07.489633 kubelet[3599]: I1124 00:12:07.489548 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/133e718f-e16a-471d-9832-196325dfbc53-config-volume\") pod \"coredns-668d6bf9bc-7cv4s\" (UID: \"133e718f-e16a-471d-9832-196325dfbc53\") " pod="kube-system/coredns-668d6bf9bc-7cv4s" Nov 24 00:12:07.489694 kubelet[3599]: I1124 00:12:07.489629 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02dedcc0-cbf6-46e5-bf8e-d29b3313eb81-tigera-ca-bundle\") pod \"calico-kube-controllers-9bb64f948-hbf2v\" (UID: \"02dedcc0-cbf6-46e5-bf8e-d29b3313eb81\") " pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" Nov 24 00:12:07.489694 kubelet[3599]: I1124 00:12:07.489655 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-646wl\" (UniqueName: \"kubernetes.io/projected/133e718f-e16a-471d-9832-196325dfbc53-kube-api-access-646wl\") pod \"coredns-668d6bf9bc-7cv4s\" (UID: \"133e718f-e16a-471d-9832-196325dfbc53\") " pod="kube-system/coredns-668d6bf9bc-7cv4s" Nov 24 00:12:07.490893 kubelet[3599]: I1124 00:12:07.489699 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9441d7ab-9ca0-4aa4-8c69-0bae216edd81-config\") pod \"goldmane-666569f655-9xvvr\" (UID: \"9441d7ab-9ca0-4aa4-8c69-0bae216edd81\") " pod="calico-system/goldmane-666569f655-9xvvr" Nov 24 00:12:07.490893 kubelet[3599]: I1124 00:12:07.489781 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9441d7ab-9ca0-4aa4-8c69-0bae216edd81-goldmane-ca-bundle\") pod \"goldmane-666569f655-9xvvr\" (UID: \"9441d7ab-9ca0-4aa4-8c69-0bae216edd81\") " pod="calico-system/goldmane-666569f655-9xvvr" Nov 24 00:12:07.490893 kubelet[3599]: I1124 00:12:07.490040 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvfx5\" (UniqueName: \"kubernetes.io/projected/02dedcc0-cbf6-46e5-bf8e-d29b3313eb81-kube-api-access-bvfx5\") pod \"calico-kube-controllers-9bb64f948-hbf2v\" (UID: \"02dedcc0-cbf6-46e5-bf8e-d29b3313eb81\") " pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" Nov 24 00:12:07.490893 kubelet[3599]: 
I1124 00:12:07.490891 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-285p8\" (UniqueName: \"kubernetes.io/projected/9441d7ab-9ca0-4aa4-8c69-0bae216edd81-kube-api-access-285p8\") pod \"goldmane-666569f655-9xvvr\" (UID: \"9441d7ab-9ca0-4aa4-8c69-0bae216edd81\") " pod="calico-system/goldmane-666569f655-9xvvr" Nov 24 00:12:07.491320 kubelet[3599]: I1124 00:12:07.491294 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abebab1e-f092-4a6b-94e1-1c92a233e08a-calico-apiserver-certs\") pod \"calico-apiserver-68bdc98bdb-jnjxv\" (UID: \"abebab1e-f092-4a6b-94e1-1c92a233e08a\") " pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" Nov 24 00:12:07.493165 kubelet[3599]: I1124 00:12:07.491456 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngkq\" (UniqueName: \"kubernetes.io/projected/abebab1e-f092-4a6b-94e1-1c92a233e08a-kube-api-access-kngkq\") pod \"calico-apiserver-68bdc98bdb-jnjxv\" (UID: \"abebab1e-f092-4a6b-94e1-1c92a233e08a\") " pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" Nov 24 00:12:07.583670 systemd[1]: Created slice kubepods-besteffort-pod32cb229b_909c_49d5_aa91_1c2bceaac746.slice - libcontainer container kubepods-besteffort-pod32cb229b_909c_49d5_aa91_1c2bceaac746.slice. Nov 24 00:12:07.589266 containerd[1981]: time="2025-11-24T00:12:07.589225106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5ntz,Uid:32cb229b-909c-49d5-aa91-1c2bceaac746,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:07.710632 containerd[1981]: time="2025-11-24T00:12:07.710589485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl56c,Uid:4f493023-ee80-40fd-b330-73391b1466e0,Namespace:kube-system,Attempt:0,}" Nov 24 00:12:07.733338 containerd[1981]: time="2025-11-24T00:12:07.733267885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-v9btm,Uid:9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:12:07.745875 containerd[1981]: time="2025-11-24T00:12:07.745644149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dfc98864-s8hpf,Uid:34188177-1cb2-4f9a-a0df-59150fa93682,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:07.764097 containerd[1981]: time="2025-11-24T00:12:07.763963916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-jnjxv,Uid:abebab1e-f092-4a6b-94e1-1c92a233e08a,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:12:07.778301 containerd[1981]: time="2025-11-24T00:12:07.778251496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cv4s,Uid:133e718f-e16a-471d-9832-196325dfbc53,Namespace:kube-system,Attempt:0,}" Nov 24 00:12:07.782388 containerd[1981]: time="2025-11-24T00:12:07.782334864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9xvvr,Uid:9441d7ab-9ca0-4aa4-8c69-0bae216edd81,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:07.792642 containerd[1981]: time="2025-11-24T00:12:07.792593105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bb64f948-hbf2v,Uid:02dedcc0-cbf6-46e5-bf8e-d29b3313eb81,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:08.037543 containerd[1981]: time="2025-11-24T00:12:08.037364035Z" level=error msg="Failed to destroy network for sandbox 
\"c849fbdfcb0b4aba465cfcb8f6bde93147ab8f47f564b0b7fb3b3bc227981cfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.043308 containerd[1981]: time="2025-11-24T00:12:08.042678061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 00:12:08.101795 containerd[1981]: time="2025-11-24T00:12:08.048824178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bb64f948-hbf2v,Uid:02dedcc0-cbf6-46e5-bf8e-d29b3313eb81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c849fbdfcb0b4aba465cfcb8f6bde93147ab8f47f564b0b7fb3b3bc227981cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.110392 kubelet[3599]: E1124 00:12:08.110321 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c849fbdfcb0b4aba465cfcb8f6bde93147ab8f47f564b0b7fb3b3bc227981cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.113079 kubelet[3599]: E1124 00:12:08.111933 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c849fbdfcb0b4aba465cfcb8f6bde93147ab8f47f564b0b7fb3b3bc227981cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" Nov 24 00:12:08.113079 kubelet[3599]: E1124 00:12:08.111982 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c849fbdfcb0b4aba465cfcb8f6bde93147ab8f47f564b0b7fb3b3bc227981cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" Nov 24 00:12:08.113079 kubelet[3599]: E1124 00:12:08.112064 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9bb64f948-hbf2v_calico-system(02dedcc0-cbf6-46e5-bf8e-d29b3313eb81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9bb64f948-hbf2v_calico-system(02dedcc0-cbf6-46e5-bf8e-d29b3313eb81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c849fbdfcb0b4aba465cfcb8f6bde93147ab8f47f564b0b7fb3b3bc227981cfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:12:08.175172 containerd[1981]: time="2025-11-24T00:12:08.175030373Z" level=error msg="Failed to destroy network for sandbox \"278f1c8be2d28bf170e290996ca9715cae2b230b49211a91e97f1bb67aee9c79\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.181012 containerd[1981]: time="2025-11-24T00:12:08.180363519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl56c,Uid:4f493023-ee80-40fd-b330-73391b1466e0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"278f1c8be2d28bf170e290996ca9715cae2b230b49211a91e97f1bb67aee9c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.183452 kubelet[3599]: E1124 00:12:08.183405 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278f1c8be2d28bf170e290996ca9715cae2b230b49211a91e97f1bb67aee9c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.183913 kubelet[3599]: E1124 00:12:08.183651 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278f1c8be2d28bf170e290996ca9715cae2b230b49211a91e97f1bb67aee9c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zl56c" Nov 24 00:12:08.183913 kubelet[3599]: E1124 00:12:08.183707 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278f1c8be2d28bf170e290996ca9715cae2b230b49211a91e97f1bb67aee9c79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zl56c" Nov 24 00:12:08.184951 kubelet[3599]: E1124 00:12:08.184154 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zl56c_kube-system(4f493023-ee80-40fd-b330-73391b1466e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zl56c_kube-system(4f493023-ee80-40fd-b330-73391b1466e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"278f1c8be2d28bf170e290996ca9715cae2b230b49211a91e97f1bb67aee9c79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zl56c" podUID="4f493023-ee80-40fd-b330-73391b1466e0" Nov 24 00:12:08.210103 systemd[1]: run-netns-cni\x2d8c3780e5\x2dcde2\x2da489\x2d2ac8\x2dcec7f72da1a6.mount: Deactivated successfully. 
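Each failed sandbox also leaves behind a network namespace mount under /run/netns/, which systemd then cleans up; unit names like run-netns-cni\x2d8c3780e5\x2d... look odd only because systemd escapes literal "-" characters in a path element as \x2d when forming a mount unit name. A small Go sketch (illustrative, not systemd's own code) that undoes the escaping to recover the namespace name:

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
    )

    // unescapeUnit reverses systemd's \xNN escaping in unit names,
    // e.g. `cni\x2d8c3780e5` becomes `cni-8c3780e5`.
    func unescapeUnit(s string) string {
        re := regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)
        return re.ReplaceAllStringFunc(s, func(m string) string {
            n, _ := strconv.ParseUint(m[2:], 16, 8)
            return string(rune(n))
        })
    }

    func main() {
        unit := `run-netns-cni\x2d8c3780e5\x2dcde2\x2da489\x2d2ac8\x2dcec7f72da1a6.mount`
        fmt.Println(unescapeUnit(unit))
        // run-netns-cni-8c3780e5-cde2-a489-2ac8-cec7f72da1a6.mount
        // i.e. the mount unit for /run/netns/cni-8c3780e5-cde2-a489-2ac8-cec7f72da1a6
    }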
Nov 24 00:12:08.222153 containerd[1981]: time="2025-11-24T00:12:08.221398754Z" level=error msg="Failed to destroy network for sandbox \"4d55cd8d2087269890bb5d2d91c259bf76c67d115540685c11ba24d6f6f156a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.226806 systemd[1]: run-netns-cni\x2d4b106f74\x2d2fba\x2dacc7\x2debb1\x2d88ffe2b36037.mount: Deactivated successfully. Nov 24 00:12:08.227312 containerd[1981]: time="2025-11-24T00:12:08.227147338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-v9btm,Uid:9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55cd8d2087269890bb5d2d91c259bf76c67d115540685c11ba24d6f6f156a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.232019 kubelet[3599]: E1124 00:12:08.229785 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55cd8d2087269890bb5d2d91c259bf76c67d115540685c11ba24d6f6f156a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.232019 kubelet[3599]: E1124 00:12:08.229871 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55cd8d2087269890bb5d2d91c259bf76c67d115540685c11ba24d6f6f156a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" Nov 24 00:12:08.232019 kubelet[3599]: E1124 00:12:08.229906 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d55cd8d2087269890bb5d2d91c259bf76c67d115540685c11ba24d6f6f156a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" Nov 24 00:12:08.232573 containerd[1981]: time="2025-11-24T00:12:08.229968274Z" level=error msg="Failed to destroy network for sandbox \"1a5f10ca7c08ede13464efd84a01c19034a184855570994efb26e3cbbade1e5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.232638 kubelet[3599]: E1124 00:12:08.229958 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68bdc98bdb-v9btm_calico-apiserver(9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68bdc98bdb-v9btm_calico-apiserver(9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d55cd8d2087269890bb5d2d91c259bf76c67d115540685c11ba24d6f6f156a5\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:12:08.236384 containerd[1981]: time="2025-11-24T00:12:08.236328035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cv4s,Uid:133e718f-e16a-471d-9832-196325dfbc53,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5f10ca7c08ede13464efd84a01c19034a184855570994efb26e3cbbade1e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.237373 kubelet[3599]: E1124 00:12:08.236622 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5f10ca7c08ede13464efd84a01c19034a184855570994efb26e3cbbade1e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.237373 kubelet[3599]: E1124 00:12:08.236701 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5f10ca7c08ede13464efd84a01c19034a184855570994efb26e3cbbade1e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7cv4s" Nov 24 00:12:08.237373 kubelet[3599]: E1124 00:12:08.236729 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a5f10ca7c08ede13464efd84a01c19034a184855570994efb26e3cbbade1e5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7cv4s" Nov 24 00:12:08.237083 systemd[1]: run-netns-cni\x2db87bf5e5\x2d2311\x2d759c\x2d40c3\x2d4604282e710c.mount: Deactivated successfully. 
Nov 24 00:12:08.237697 kubelet[3599]: E1124 00:12:08.236786 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7cv4s_kube-system(133e718f-e16a-471d-9832-196325dfbc53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7cv4s_kube-system(133e718f-e16a-471d-9832-196325dfbc53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a5f10ca7c08ede13464efd84a01c19034a184855570994efb26e3cbbade1e5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7cv4s" podUID="133e718f-e16a-471d-9832-196325dfbc53" Nov 24 00:12:08.249308 containerd[1981]: time="2025-11-24T00:12:08.247341049Z" level=error msg="Failed to destroy network for sandbox \"c495236b8407715359a7f88e33f8273c09d7d63cea25c7dada7916d0fbf57403\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.252447 containerd[1981]: time="2025-11-24T00:12:08.250995929Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-jnjxv,Uid:abebab1e-f092-4a6b-94e1-1c92a233e08a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c495236b8407715359a7f88e33f8273c09d7d63cea25c7dada7916d0fbf57403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.252059 systemd[1]: run-netns-cni\x2d13a8d74b\x2d1841\x2db375\x2db595\x2d9acab2effd6f.mount: Deactivated successfully. 
Nov 24 00:12:08.254862 kubelet[3599]: E1124 00:12:08.253217 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c495236b8407715359a7f88e33f8273c09d7d63cea25c7dada7916d0fbf57403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.254862 kubelet[3599]: E1124 00:12:08.253289 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c495236b8407715359a7f88e33f8273c09d7d63cea25c7dada7916d0fbf57403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" Nov 24 00:12:08.254862 kubelet[3599]: E1124 00:12:08.253315 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c495236b8407715359a7f88e33f8273c09d7d63cea25c7dada7916d0fbf57403\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" Nov 24 00:12:08.255084 kubelet[3599]: E1124 00:12:08.253362 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68bdc98bdb-jnjxv_calico-apiserver(abebab1e-f092-4a6b-94e1-1c92a233e08a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68bdc98bdb-jnjxv_calico-apiserver(abebab1e-f092-4a6b-94e1-1c92a233e08a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c495236b8407715359a7f88e33f8273c09d7d63cea25c7dada7916d0fbf57403\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:12:08.265578 containerd[1981]: time="2025-11-24T00:12:08.265310632Z" level=error msg="Failed to destroy network for sandbox \"17f97c4febe41ebf12587a4772f315170377f5024ce530b965a024e01a483ff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.269345 containerd[1981]: time="2025-11-24T00:12:08.269285485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dfc98864-s8hpf,Uid:34188177-1cb2-4f9a-a0df-59150fa93682,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f97c4febe41ebf12587a4772f315170377f5024ce530b965a024e01a483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.270214 kubelet[3599]: E1124 00:12:08.270028 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f97c4febe41ebf12587a4772f315170377f5024ce530b965a024e01a483ff9\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.270214 kubelet[3599]: E1124 00:12:08.270113 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f97c4febe41ebf12587a4772f315170377f5024ce530b965a024e01a483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58dfc98864-s8hpf" Nov 24 00:12:08.270214 kubelet[3599]: E1124 00:12:08.270151 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17f97c4febe41ebf12587a4772f315170377f5024ce530b965a024e01a483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58dfc98864-s8hpf" Nov 24 00:12:08.270415 kubelet[3599]: E1124 00:12:08.270214 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58dfc98864-s8hpf_calico-system(34188177-1cb2-4f9a-a0df-59150fa93682)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58dfc98864-s8hpf_calico-system(34188177-1cb2-4f9a-a0df-59150fa93682)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17f97c4febe41ebf12587a4772f315170377f5024ce530b965a024e01a483ff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58dfc98864-s8hpf" podUID="34188177-1cb2-4f9a-a0df-59150fa93682" Nov 24 00:12:08.276965 containerd[1981]: time="2025-11-24T00:12:08.276819066Z" level=error msg="Failed to destroy network for sandbox \"42690e73201927ba29bbd165cd68b253f4107af1f589ae4a329d2857e64edf0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.278166 containerd[1981]: time="2025-11-24T00:12:08.278109915Z" level=error msg="Failed to destroy network for sandbox \"a24a3d97f58982c3b43e31020a67b3ebcf217b1f4d1c0892fa02eb61e0bad5dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.279924 containerd[1981]: time="2025-11-24T00:12:08.279761073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9xvvr,Uid:9441d7ab-9ca0-4aa4-8c69-0bae216edd81,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42690e73201927ba29bbd165cd68b253f4107af1f589ae4a329d2857e64edf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.280112 kubelet[3599]: E1124 00:12:08.280058 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"42690e73201927ba29bbd165cd68b253f4107af1f589ae4a329d2857e64edf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.280386 kubelet[3599]: E1124 00:12:08.280126 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42690e73201927ba29bbd165cd68b253f4107af1f589ae4a329d2857e64edf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9xvvr" Nov 24 00:12:08.280386 kubelet[3599]: E1124 00:12:08.280262 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42690e73201927ba29bbd165cd68b253f4107af1f589ae4a329d2857e64edf0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9xvvr" Nov 24 00:12:08.280386 kubelet[3599]: E1124 00:12:08.280317 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9xvvr_calico-system(9441d7ab-9ca0-4aa4-8c69-0bae216edd81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9xvvr_calico-system(9441d7ab-9ca0-4aa4-8c69-0bae216edd81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42690e73201927ba29bbd165cd68b253f4107af1f589ae4a329d2857e64edf0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:12:08.282620 containerd[1981]: time="2025-11-24T00:12:08.281968878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5ntz,Uid:32cb229b-909c-49d5-aa91-1c2bceaac746,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24a3d97f58982c3b43e31020a67b3ebcf217b1f4d1c0892fa02eb61e0bad5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.283557 kubelet[3599]: E1124 00:12:08.283293 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24a3d97f58982c3b43e31020a67b3ebcf217b1f4d1c0892fa02eb61e0bad5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 00:12:08.283557 kubelet[3599]: E1124 00:12:08.283537 3599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24a3d97f58982c3b43e31020a67b3ebcf217b1f4d1c0892fa02eb61e0bad5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5ntz" Nov 
24 00:12:08.283918 kubelet[3599]: E1124 00:12:08.283871 3599 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a24a3d97f58982c3b43e31020a67b3ebcf217b1f4d1c0892fa02eb61e0bad5dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l5ntz" Nov 24 00:12:08.285056 kubelet[3599]: E1124 00:12:08.285015 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a24a3d97f58982c3b43e31020a67b3ebcf217b1f4d1c0892fa02eb61e0bad5dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:09.168032 systemd[1]: run-netns-cni\x2d4995ffdb\x2d6dd2\x2d6808\x2d162d\x2dbf2438037f5b.mount: Deactivated successfully. Nov 24 00:12:09.168195 systemd[1]: run-netns-cni\x2d014ed1bf\x2dea99\x2df424\x2d952b\x2d8784b0f856b0.mount: Deactivated successfully. Nov 24 00:12:09.168281 systemd[1]: run-netns-cni\x2dfafa02bd\x2d6997\x2de5b1\x2d138a\x2d97bc45397fa8.mount: Deactivated successfully. Nov 24 00:12:16.161432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622440074.mount: Deactivated successfully. 
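Every sandbox in the burst above fails for the same reason: the Calico CNI plugin refuses to set up (or tear down) a pod network until /var/lib/calico/nodename exists, and that file is only written by the calico/node container, whose image is still being pulled at this point (the PullImage request went out at 00:12:08 and only completes at 00:12:16 below). A minimal Go sketch of the same precondition, for illustration only rather than the plugin's actual code:

    package main

    import (
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the check the errors above complain about: until
    // calico/node has written the nodename file, fail with a hint.
    func readNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            return "", fmt.Errorf("stat %s: check that the calico/node container is running and has mounted /var/lib/calico/: %w", nodenameFile, err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return string(data), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Println("CNI ADD/DEL would fail:", err)
            return
        }
        fmt.Println("node name:", name)
    }

Once calico-node starts (00:12:17 below) and writes the file, retried sandboxes go through, as the whisker pod's successful network setup at 00:12:19 shows.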
Nov 24 00:12:16.362323 containerd[1981]: time="2025-11-24T00:12:16.354443691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:12:16.364297 containerd[1981]: time="2025-11-24T00:12:16.355717181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 00:12:16.383580 containerd[1981]: time="2025-11-24T00:12:16.383527545Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:12:16.388619 containerd[1981]: time="2025-11-24T00:12:16.388571637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 00:12:16.396539 containerd[1981]: time="2025-11-24T00:12:16.396493831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.349929634s" Nov 24 00:12:16.397193 containerd[1981]: time="2025-11-24T00:12:16.396689460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 00:12:16.448081 containerd[1981]: time="2025-11-24T00:12:16.447953438Z" level=info msg="CreateContainer within sandbox \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 00:12:16.552876 containerd[1981]: time="2025-11-24T00:12:16.549101562Z" level=info msg="Container b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:12:16.556513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527843910.mount: Deactivated successfully. Nov 24 00:12:16.829685 containerd[1981]: time="2025-11-24T00:12:16.829511310Z" level=info msg="CreateContainer within sandbox \"b4e6eab22545ca8c7a3269cfe21b21086254637b3624a175e9fb8f0b24d2d648\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff\"" Nov 24 00:12:16.830881 containerd[1981]: time="2025-11-24T00:12:16.830780785Z" level=info msg="StartContainer for \"b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff\"" Nov 24 00:12:16.850703 containerd[1981]: time="2025-11-24T00:12:16.850593977Z" level=info msg="connecting to shim b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff" address="unix:///run/containerd/s/8f6b9bfc541412ca40bfbc7a1d27c3c650edbae45cc61fb904381da96bded48d" protocol=ttrpc version=3 Nov 24 00:12:17.000354 systemd[1]: Started cri-containerd-b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff.scope - libcontainer container b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff. Nov 24 00:12:17.199271 containerd[1981]: time="2025-11-24T00:12:17.199045140Z" level=info msg="StartContainer for \"b78de9128a8b61337d619bf95540b998e64bf229b82cfd8968abb8b156142eff\" returns successfully" Nov 24 00:12:17.362544 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 24 00:12:17.363241 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 24 00:12:17.707368 kubelet[3599]: I1124 00:12:17.707067 3599 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtzbn\" (UniqueName: \"kubernetes.io/projected/34188177-1cb2-4f9a-a0df-59150fa93682-kube-api-access-gtzbn\") pod \"34188177-1cb2-4f9a-a0df-59150fa93682\" (UID: \"34188177-1cb2-4f9a-a0df-59150fa93682\") " Nov 24 00:12:17.707368 kubelet[3599]: I1124 00:12:17.707127 3599 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-ca-bundle\") pod \"34188177-1cb2-4f9a-a0df-59150fa93682\" (UID: \"34188177-1cb2-4f9a-a0df-59150fa93682\") " Nov 24 00:12:17.707368 kubelet[3599]: I1124 00:12:17.707160 3599 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-backend-key-pair\") pod \"34188177-1cb2-4f9a-a0df-59150fa93682\" (UID: \"34188177-1cb2-4f9a-a0df-59150fa93682\") " Nov 24 00:12:17.715168 kubelet[3599]: I1124 00:12:17.715101 3599 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "34188177-1cb2-4f9a-a0df-59150fa93682" (UID: "34188177-1cb2-4f9a-a0df-59150fa93682"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 00:12:17.732375 kubelet[3599]: I1124 00:12:17.732186 3599 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34188177-1cb2-4f9a-a0df-59150fa93682-kube-api-access-gtzbn" (OuterVolumeSpecName: "kube-api-access-gtzbn") pod "34188177-1cb2-4f9a-a0df-59150fa93682" (UID: "34188177-1cb2-4f9a-a0df-59150fa93682"). InnerVolumeSpecName "kube-api-access-gtzbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 00:12:17.734248 kubelet[3599]: I1124 00:12:17.734161 3599 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "34188177-1cb2-4f9a-a0df-59150fa93682" (UID: "34188177-1cb2-4f9a-a0df-59150fa93682"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 00:12:17.735130 systemd[1]: var-lib-kubelet-pods-34188177\x2d1cb2\x2d4f9a\x2da0df\x2d59150fa93682-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 00:12:17.743007 systemd[1]: var-lib-kubelet-pods-34188177\x2d1cb2\x2d4f9a\x2da0df\x2d59150fa93682-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtzbn.mount: Deactivated successfully. 
Nov 24 00:12:17.808719 kubelet[3599]: I1124 00:12:17.808667 3599 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtzbn\" (UniqueName: \"kubernetes.io/projected/34188177-1cb2-4f9a-a0df-59150fa93682-kube-api-access-gtzbn\") on node \"ip-172-31-17-28\" DevicePath \"\"" Nov 24 00:12:17.808719 kubelet[3599]: I1124 00:12:17.808712 3599 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-ca-bundle\") on node \"ip-172-31-17-28\" DevicePath \"\"" Nov 24 00:12:17.808719 kubelet[3599]: I1124 00:12:17.808725 3599 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34188177-1cb2-4f9a-a0df-59150fa93682-whisker-backend-key-pair\") on node \"ip-172-31-17-28\" DevicePath \"\"" Nov 24 00:12:18.113611 systemd[1]: Removed slice kubepods-besteffort-pod34188177_1cb2_4f9a_a0df_59150fa93682.slice - libcontainer container kubepods-besteffort-pod34188177_1cb2_4f9a_a0df_59150fa93682.slice. Nov 24 00:12:18.161876 kubelet[3599]: I1124 00:12:18.161204 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tjn4m" podStartSLOduration=2.587154652 podStartE2EDuration="27.161181199s" podCreationTimestamp="2025-11-24 00:11:51 +0000 UTC" firstStartedPulling="2025-11-24 00:11:51.823498857 +0000 UTC m=+25.417299646" lastFinishedPulling="2025-11-24 00:12:16.397525416 +0000 UTC m=+49.991326193" observedRunningTime="2025-11-24 00:12:18.145160798 +0000 UTC m=+51.738961593" watchObservedRunningTime="2025-11-24 00:12:18.161181199 +0000 UTC m=+51.754982007" Nov 24 00:12:18.303737 systemd[1]: Created slice kubepods-besteffort-pod036bfdfd_8582_4bd8_b46a_aee9f6d00cad.slice - libcontainer container kubepods-besteffort-pod036bfdfd_8582_4bd8_b46a_aee9f6d00cad.slice. 
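The pod_startup_latency_tracker entry above records two numbers for calico-node-tjn4m: podStartE2EDuration="27.161181199s", the gap between podCreationTimestamp (00:11:51) and watchObservedRunningTime (00:12:18.161181199), and podStartSLOduration=2.587154652, which is that same gap minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets: 49.991326193 - 25.417299646, about 24.574s). A short Go check of that arithmetic against the values copied from the entry (a sanity check only, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-11-24 00:11:51 +0000 UTC")
        observed := mustParse("2025-11-24 00:12:18.161181199 +0000 UTC")

        // Monotonic offsets (the m=+... values) bounding the image pulls.
        firstStartedPulling := 25.417299646
        lastFinishedPulling := 49.991326193

        e2e := observed.Sub(created)
        pull := time.Duration((lastFinishedPulling - firstStartedPulling) * float64(time.Second))

        fmt.Println("podStartE2EDuration:", e2e)      // 27.161181199s
        fmt.Println("image pull window:  ", pull)     // ~24.574026547s
        fmt.Println("podStartSLOduration:", e2e-pull) // ~2.587154652s
    }

In other words, of the roughly 27 seconds the calico-node pod took to come up, all but about 2.6 seconds were spent pulling its three images.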
Nov 24 00:12:18.415669 kubelet[3599]: I1124 00:12:18.415548 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d64lc\" (UniqueName: \"kubernetes.io/projected/036bfdfd-8582-4bd8-b46a-aee9f6d00cad-kube-api-access-d64lc\") pod \"whisker-67965d874b-g8xwp\" (UID: \"036bfdfd-8582-4bd8-b46a-aee9f6d00cad\") " pod="calico-system/whisker-67965d874b-g8xwp" Nov 24 00:12:18.417770 kubelet[3599]: I1124 00:12:18.415687 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/036bfdfd-8582-4bd8-b46a-aee9f6d00cad-whisker-backend-key-pair\") pod \"whisker-67965d874b-g8xwp\" (UID: \"036bfdfd-8582-4bd8-b46a-aee9f6d00cad\") " pod="calico-system/whisker-67965d874b-g8xwp" Nov 24 00:12:18.417770 kubelet[3599]: I1124 00:12:18.417685 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/036bfdfd-8582-4bd8-b46a-aee9f6d00cad-whisker-ca-bundle\") pod \"whisker-67965d874b-g8xwp\" (UID: \"036bfdfd-8582-4bd8-b46a-aee9f6d00cad\") " pod="calico-system/whisker-67965d874b-g8xwp" Nov 24 00:12:18.583197 containerd[1981]: time="2025-11-24T00:12:18.583138418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5ntz,Uid:32cb229b-909c-49d5-aa91-1c2bceaac746,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:18.590740 kubelet[3599]: I1124 00:12:18.590383 3599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34188177-1cb2-4f9a-a0df-59150fa93682" path="/var/lib/kubelet/pods/34188177-1cb2-4f9a-a0df-59150fa93682/volumes" Nov 24 00:12:18.622362 containerd[1981]: time="2025-11-24T00:12:18.622052494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67965d874b-g8xwp,Uid:036bfdfd-8582-4bd8-b46a-aee9f6d00cad,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:19.279038 kubelet[3599]: I1124 00:12:19.278999 3599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:12:19.342581 (udev-worker)[4677]: Network interface NamePolicy= disabled on kernel command line. 
Nov 24 00:12:19.347713 systemd-networkd[1746]: cali60e49e39745: Link UP Nov 24 00:12:19.354628 systemd-networkd[1746]: cali60e49e39745: Gained carrier Nov 24 00:12:19.424209 containerd[1981]: 2025-11-24 00:12:18.685 [INFO][4712] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:12:19.424209 containerd[1981]: 2025-11-24 00:12:18.771 [INFO][4712] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0 whisker-67965d874b- calico-system 036bfdfd-8582-4bd8-b46a-aee9f6d00cad 929 0 2025-11-24 00:12:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67965d874b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-28 whisker-67965d874b-g8xwp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali60e49e39745 [] [] }} ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-" Nov 24 00:12:19.424209 containerd[1981]: 2025-11-24 00:12:18.771 [INFO][4712] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.424209 containerd[1981]: 2025-11-24 00:12:19.154 [INFO][4730] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" HandleID="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Workload="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.157 [INFO][4730] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" HandleID="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Workload="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332400), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"whisker-67965d874b-g8xwp", "timestamp":"2025-11-24 00:12:19.154428732 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.157 [INFO][4730] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.157 [INFO][4730] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.158 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.177 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" host="ip-172-31-17-28" Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.203 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.221 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.229 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:19.424548 containerd[1981]: 2025-11-24 00:12:19.236 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.236 [INFO][4730] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" host="ip-172-31-17-28" Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.248 [INFO][4730] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.259 [INFO][4730] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" host="ip-172-31-17-28" Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.298 [INFO][4730] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.65/26] block=192.168.120.64/26 handle="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" host="ip-172-31-17-28" Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.298 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.65/26] handle="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" host="ip-172-31-17-28" Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.298 [INFO][4730] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:19.426944 containerd[1981]: 2025-11-24 00:12:19.298 [INFO][4730] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.65/26] IPv6=[] ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" HandleID="k8s-pod-network.9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Workload="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.427232 containerd[1981]: 2025-11-24 00:12:19.307 [INFO][4712] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0", GenerateName:"whisker-67965d874b-", Namespace:"calico-system", SelfLink:"", UID:"036bfdfd-8582-4bd8-b46a-aee9f6d00cad", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67965d874b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"whisker-67965d874b-g8xwp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali60e49e39745", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:19.427232 containerd[1981]: 2025-11-24 00:12:19.307 [INFO][4712] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.65/32] ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.427381 containerd[1981]: 2025-11-24 00:12:19.307 [INFO][4712] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e49e39745 ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.427381 containerd[1981]: 2025-11-24 00:12:19.360 [INFO][4712] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.427461 containerd[1981]: 2025-11-24 00:12:19.364 [INFO][4712] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" 
WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0", GenerateName:"whisker-67965d874b-", Namespace:"calico-system", SelfLink:"", UID:"036bfdfd-8582-4bd8-b46a-aee9f6d00cad", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 12, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67965d874b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd", Pod:"whisker-67965d874b-g8xwp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.120.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali60e49e39745", MAC:"92:86:88:93:13:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:19.427573 containerd[1981]: 2025-11-24 00:12:19.417 [INFO][4712] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" Namespace="calico-system" Pod="whisker-67965d874b-g8xwp" WorkloadEndpoint="ip--172--31--17--28-k8s-whisker--67965d874b--g8xwp-eth0" Nov 24 00:12:19.579014 containerd[1981]: time="2025-11-24T00:12:19.576998277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl56c,Uid:4f493023-ee80-40fd-b330-73391b1466e0,Namespace:kube-system,Attempt:0,}" Nov 24 00:12:19.579838 containerd[1981]: time="2025-11-24T00:12:19.579549126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cv4s,Uid:133e718f-e16a-471d-9832-196325dfbc53,Namespace:kube-system,Attempt:0,}" Nov 24 00:12:19.579838 containerd[1981]: time="2025-11-24T00:12:19.579674182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-jnjxv,Uid:abebab1e-f092-4a6b-94e1-1c92a233e08a,Namespace:calico-apiserver,Attempt:0,}" Nov 24 00:12:19.803123 systemd-networkd[1746]: cali360e4b0995f: Link UP Nov 24 00:12:19.805245 systemd-networkd[1746]: cali360e4b0995f: Gained carrier Nov 24 00:12:19.864440 containerd[1981]: 2025-11-24 00:12:18.676 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:12:19.864440 containerd[1981]: 2025-11-24 00:12:18.768 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0 csi-node-driver- calico-system 32cb229b-909c-49d5-aa91-1c2bceaac746 732 0 2025-11-24 00:11:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] 
map[] [] [] []} {k8s ip-172-31-17-28 csi-node-driver-l5ntz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali360e4b0995f [] [] }} ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-" Nov 24 00:12:19.864440 containerd[1981]: 2025-11-24 00:12:18.769 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:19.864440 containerd[1981]: 2025-11-24 00:12:19.155 [INFO][4731] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" HandleID="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Workload="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.157 [INFO][4731] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" HandleID="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Workload="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc3d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"csi-node-driver-l5ntz", "timestamp":"2025-11-24 00:12:19.155227433 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.157 [INFO][4731] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.299 [INFO][4731] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.299 [INFO][4731] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.347 [INFO][4731] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" host="ip-172-31-17-28" Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.411 [INFO][4731] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.443 [INFO][4731] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.460 [INFO][4731] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.497 [INFO][4731] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:19.866533 containerd[1981]: 2025-11-24 00:12:19.497 [INFO][4731] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" host="ip-172-31-17-28" Nov 24 00:12:19.867838 containerd[1981]: 2025-11-24 00:12:19.566 [INFO][4731] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1 Nov 24 00:12:19.867838 containerd[1981]: 2025-11-24 00:12:19.607 [INFO][4731] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" host="ip-172-31-17-28" Nov 24 00:12:19.867838 containerd[1981]: 2025-11-24 00:12:19.749 [INFO][4731] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.66/26] block=192.168.120.64/26 handle="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" host="ip-172-31-17-28" Nov 24 00:12:19.867838 containerd[1981]: 2025-11-24 00:12:19.749 [INFO][4731] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.66/26] handle="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" host="ip-172-31-17-28" Nov 24 00:12:19.867838 containerd[1981]: 2025-11-24 00:12:19.750 [INFO][4731] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:19.867838 containerd[1981]: 2025-11-24 00:12:19.750 [INFO][4731] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.66/26] IPv6=[] ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" HandleID="k8s-pod-network.4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Workload="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:19.870624 containerd[1981]: 2025-11-24 00:12:19.787 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32cb229b-909c-49d5-aa91-1c2bceaac746", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"csi-node-driver-l5ntz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali360e4b0995f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:19.870758 containerd[1981]: 2025-11-24 00:12:19.787 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.66/32] ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:19.870758 containerd[1981]: 2025-11-24 00:12:19.787 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali360e4b0995f ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:19.870758 containerd[1981]: 2025-11-24 00:12:19.802 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:19.872027 containerd[1981]: 2025-11-24 00:12:19.804 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" 
Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32cb229b-909c-49d5-aa91-1c2bceaac746", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1", Pod:"csi-node-driver-l5ntz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali360e4b0995f", MAC:"86:c0:7a:29:9b:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:19.872144 containerd[1981]: 2025-11-24 00:12:19.850 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" Namespace="calico-system" Pod="csi-node-driver-l5ntz" WorkloadEndpoint="ip--172--31--17--28-k8s-csi--node--driver--l5ntz-eth0" Nov 24 00:12:20.230091 containerd[1981]: time="2025-11-24T00:12:20.230027369Z" level=info msg="connecting to shim 9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd" address="unix:///run/containerd/s/b9973add5cfbb77ec14e14a7ccd446b677664a4c73461bd2b6ca612ce92382e6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:20.242036 containerd[1981]: time="2025-11-24T00:12:20.241767747Z" level=info msg="connecting to shim 4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1" address="unix:///run/containerd/s/4f3703543541f3703fd2d43460507010eae29a322f2a321dd2dc6831a9c8e465" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:20.388090 systemd[1]: Started cri-containerd-9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd.scope - libcontainer container 9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd. Nov 24 00:12:20.482184 systemd[1]: Started cri-containerd-4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1.scope - libcontainer container 4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1. 
Nov 24 00:12:20.533552 systemd-networkd[1746]: cali0729c46fe81: Link UP Nov 24 00:12:20.538859 systemd-networkd[1746]: cali0729c46fe81: Gained carrier Nov 24 00:12:20.589543 containerd[1981]: 2025-11-24 00:12:19.934 [INFO][4845] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:12:20.589543 containerd[1981]: 2025-11-24 00:12:20.036 [INFO][4845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0 coredns-668d6bf9bc- kube-system 133e718f-e16a-471d-9832-196325dfbc53 857 0 2025-11-24 00:11:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-28 coredns-668d6bf9bc-7cv4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0729c46fe81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-" Nov 24 00:12:20.589543 containerd[1981]: 2025-11-24 00:12:20.036 [INFO][4845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.589543 containerd[1981]: 2025-11-24 00:12:20.246 [INFO][4888] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" HandleID="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Workload="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.246 [INFO][4888] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" HandleID="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Workload="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e7f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-28", "pod":"coredns-668d6bf9bc-7cv4s", "timestamp":"2025-11-24 00:12:20.246317725 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.246 [INFO][4888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.246 [INFO][4888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.246 [INFO][4888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.300 [INFO][4888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" host="ip-172-31-17-28" Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.326 [INFO][4888] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.368 [INFO][4888] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.381 [INFO][4888] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.407 [INFO][4888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:20.590450 containerd[1981]: 2025-11-24 00:12:20.407 [INFO][4888] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" host="ip-172-31-17-28" Nov 24 00:12:20.591157 containerd[1981]: 2025-11-24 00:12:20.418 [INFO][4888] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3 Nov 24 00:12:20.591157 containerd[1981]: 2025-11-24 00:12:20.433 [INFO][4888] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" host="ip-172-31-17-28" Nov 24 00:12:20.591157 containerd[1981]: 2025-11-24 00:12:20.455 [INFO][4888] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.67/26] block=192.168.120.64/26 handle="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" host="ip-172-31-17-28" Nov 24 00:12:20.591157 containerd[1981]: 2025-11-24 00:12:20.456 [INFO][4888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.67/26] handle="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" host="ip-172-31-17-28" Nov 24 00:12:20.591157 containerd[1981]: 2025-11-24 00:12:20.456 [INFO][4888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:20.591157 containerd[1981]: 2025-11-24 00:12:20.456 [INFO][4888] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.67/26] IPv6=[] ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" HandleID="k8s-pod-network.16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Workload="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.591786 containerd[1981]: 2025-11-24 00:12:20.489 [INFO][4845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"133e718f-e16a-471d-9832-196325dfbc53", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"coredns-668d6bf9bc-7cv4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0729c46fe81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:20.591786 containerd[1981]: 2025-11-24 00:12:20.497 [INFO][4845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.67/32] ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.591786 containerd[1981]: 2025-11-24 00:12:20.498 [INFO][4845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0729c46fe81 ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.591786 containerd[1981]: 2025-11-24 00:12:20.540 [INFO][4845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" 
WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.591786 containerd[1981]: 2025-11-24 00:12:20.541 [INFO][4845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"133e718f-e16a-471d-9832-196325dfbc53", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3", Pod:"coredns-668d6bf9bc-7cv4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0729c46fe81", MAC:"9a:97:3d:2f:4a:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:20.591786 containerd[1981]: 2025-11-24 00:12:20.572 [INFO][4845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" Namespace="kube-system" Pod="coredns-668d6bf9bc-7cv4s" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--7cv4s-eth0" Nov 24 00:12:20.625935 systemd-networkd[1746]: cali9f32bc39f10: Link UP Nov 24 00:12:20.627485 systemd-networkd[1746]: cali9f32bc39f10: Gained carrier Nov 24 00:12:20.657068 kubelet[3599]: I1124 00:12:20.656474 3599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 00:12:20.676559 containerd[1981]: time="2025-11-24T00:12:20.676040384Z" level=info msg="connecting to shim 16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3" address="unix:///run/containerd/s/ef70982c56473c573d06c0cad842aeee230c4b52bb4342c1429d7fb3b8bc7ee8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:19.889 [INFO][4832] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.005 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0 coredns-668d6bf9bc- kube-system 4f493023-ee80-40fd-b330-73391b1466e0 845 0 2025-11-24 00:11:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-28 coredns-668d6bf9bc-zl56c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9f32bc39f10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.005 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.252 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" HandleID="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Workload="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.275 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" HandleID="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Workload="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312800), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-28", "pod":"coredns-668d6bf9bc-zl56c", "timestamp":"2025-11-24 00:12:20.252786139 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.276 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.459 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.459 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.485 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.503 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.528 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.539 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.553 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.553 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.561 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715 Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.577 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.607 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.68/26] block=192.168.120.64/26 handle="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.608 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.68/26] handle="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" host="ip-172-31-17-28" Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.608 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:20.739819 containerd[1981]: 2025-11-24 00:12:20.611 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.68/26] IPv6=[] ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" HandleID="k8s-pod-network.b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Workload="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.747648 containerd[1981]: 2025-11-24 00:12:20.618 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f493023-ee80-40fd-b330-73391b1466e0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"coredns-668d6bf9bc-zl56c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f32bc39f10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:20.747648 containerd[1981]: 2025-11-24 00:12:20.620 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.68/32] ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.747648 containerd[1981]: 2025-11-24 00:12:20.620 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f32bc39f10 ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.747648 containerd[1981]: 2025-11-24 00:12:20.631 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" 
WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.747648 containerd[1981]: 2025-11-24 00:12:20.636 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4f493023-ee80-40fd-b330-73391b1466e0", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715", Pod:"coredns-668d6bf9bc-zl56c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.120.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f32bc39f10", MAC:"66:69:a2:59:86:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:20.747648 containerd[1981]: 2025-11-24 00:12:20.686 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" Namespace="kube-system" Pod="coredns-668d6bf9bc-zl56c" WorkloadEndpoint="ip--172--31--17--28-k8s-coredns--668d6bf9bc--zl56c-eth0" Nov 24 00:12:20.901353 systemd-networkd[1746]: cali531c328ab9c: Link UP Nov 24 00:12:20.910068 systemd-networkd[1746]: cali531c328ab9c: Gained carrier Nov 24 00:12:20.928555 systemd[1]: Started cri-containerd-16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3.scope - libcontainer container 16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3. 
Nov 24 00:12:20.949646 containerd[1981]: time="2025-11-24T00:12:20.949387800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67965d874b-g8xwp,Uid:036bfdfd-8582-4bd8-b46a-aee9f6d00cad,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b7281b42b2e44298d6b05a98a3a1236814f100a2eb6d614c62949265b9541dd\"" Nov 24 00:12:20.956032 containerd[1981]: time="2025-11-24T00:12:20.955989660Z" level=info msg="connecting to shim b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715" address="unix:///run/containerd/s/fffb8570ea20e30fa3b803471f594de457326442c05df356bebb65f861ef740c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:20.960237 containerd[1981]: time="2025-11-24T00:12:20.960163540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l5ntz,Uid:32cb229b-909c-49d5-aa91-1c2bceaac746,Namespace:calico-system,Attempt:0,} returns sandbox id \"4801b8121db275e298406f9a509a544de4246c8ac46560e46370040d435a04c1\"" Nov 24 00:12:20.990334 containerd[1981]: time="2025-11-24T00:12:20.989176321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:19.942 [INFO][4843] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.035 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0 calico-apiserver-68bdc98bdb- calico-apiserver abebab1e-f092-4a6b-94e1-1c92a233e08a 855 0 2025-11-24 00:11:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68bdc98bdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-28 calico-apiserver-68bdc98bdb-jnjxv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali531c328ab9c [] [] }} ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.035 [INFO][4843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.427 [INFO][4891] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" HandleID="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Workload="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.430 [INFO][4891] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" HandleID="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Workload="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385030), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-28", "pod":"calico-apiserver-68bdc98bdb-jnjxv", "timestamp":"2025-11-24 00:12:20.42693031 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.430 [INFO][4891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.609 [INFO][4891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.609 [INFO][4891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.648 [INFO][4891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.687 [INFO][4891] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.749 [INFO][4891] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.758 [INFO][4891] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.768 [INFO][4891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.768 [INFO][4891] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.774 [INFO][4891] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.794 [INFO][4891] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.811 [INFO][4891] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.69/26] block=192.168.120.64/26 handle="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.811 [INFO][4891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.69/26] handle="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" host="ip-172-31-17-28" Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.811 [INFO][4891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:21.011010 containerd[1981]: 2025-11-24 00:12:20.811 [INFO][4891] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.69/26] IPv6=[] ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" HandleID="k8s-pod-network.da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Workload="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.012027 containerd[1981]: 2025-11-24 00:12:20.852 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0", GenerateName:"calico-apiserver-68bdc98bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"abebab1e-f092-4a6b-94e1-1c92a233e08a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bdc98bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"calico-apiserver-68bdc98bdb-jnjxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali531c328ab9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:21.012027 containerd[1981]: 2025-11-24 00:12:20.852 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.69/32] ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.012027 containerd[1981]: 2025-11-24 00:12:20.852 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali531c328ab9c ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.012027 containerd[1981]: 2025-11-24 00:12:20.919 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.012027 containerd[1981]: 2025-11-24 00:12:20.923 [INFO][4843] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0", GenerateName:"calico-apiserver-68bdc98bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"abebab1e-f092-4a6b-94e1-1c92a233e08a", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bdc98bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf", Pod:"calico-apiserver-68bdc98bdb-jnjxv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali531c328ab9c", MAC:"c6:f2:ba:bb:d3:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:21.012027 containerd[1981]: 2025-11-24 00:12:20.974 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-jnjxv" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--jnjxv-eth0" Nov 24 00:12:21.155482 systemd[1]: Started cri-containerd-b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715.scope - libcontainer container b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715. Nov 24 00:12:21.245114 systemd-networkd[1746]: cali60e49e39745: Gained IPv6LL Nov 24 00:12:21.281158 containerd[1981]: time="2025-11-24T00:12:21.281072668Z" level=info msg="connecting to shim da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf" address="unix:///run/containerd/s/56a853329a5fcf9ef367e3fbd1ede40f4347d45413b93b32b1f02f2c881c3f6b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:21.297510 containerd[1981]: time="2025-11-24T00:12:21.296028587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7cv4s,Uid:133e718f-e16a-471d-9832-196325dfbc53,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3\"" Nov 24 00:12:21.459201 systemd[1]: Started cri-containerd-da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf.scope - libcontainer container da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf. 
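[editor's note] The two entries above show Calico's IPAM handing this pod 192.168.120.69 out of the node-affine 192.168.120.64/26 block and recording it on the WorkloadEndpoint as a /32 behind the host-side veth cali531c328ab9c. The sketch below only illustrates that block-allocation pattern under a first-free-address assumption; it is not Calico's implementation, and the helper names (nextFreeIP, incIP) are invented for the example.

package main

import (
	"fmt"
	"net"
)

// nextFreeIP walks a block in address order and returns the first address
// that is not already marked as allocated.
func nextFreeIP(block *net.IPNet, allocated map[string]bool) (net.IP, error) {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = incIP(ip) {
		if !allocated[ip.String()] {
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", block)
}

// incIP returns ip+1, working on a copy so the caller's slice is untouched.
func incIP(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.120.64/26")

	// Pretend .64-.68 were already claimed by earlier endpoints on this node.
	allocated := map[string]bool{}
	for i := 64; i <= 68; i++ {
		allocated[fmt.Sprintf("192.168.120.%d", i)] = true
	}

	ip, err := nextFreeIP(block, allocated)
	if err != nil {
		panic(err)
	}
	// The endpoint records the address as a /32, as in the log entry above.
	fmt.Printf("assigned %s/32 from %s\n", ip, block)
}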
Nov 24 00:12:21.462416 containerd[1981]: time="2025-11-24T00:12:21.462304951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zl56c,Uid:4f493023-ee80-40fd-b330-73391b1466e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715\"" Nov 24 00:12:21.480360 containerd[1981]: time="2025-11-24T00:12:21.480244973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:21.484690 containerd[1981]: time="2025-11-24T00:12:21.483147059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:12:21.486001 containerd[1981]: time="2025-11-24T00:12:21.483309168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:12:21.492417 kubelet[3599]: E1124 00:12:21.488551 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:12:21.502918 kubelet[3599]: E1124 00:12:21.501199 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:12:21.539678 containerd[1981]: time="2025-11-24T00:12:21.539359621Z" level=info msg="CreateContainer within sandbox \"16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:12:21.541899 kubelet[3599]: E1124 00:12:21.541477 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:21.543002 containerd[1981]: time="2025-11-24T00:12:21.542964894Z" level=info msg="CreateContainer within sandbox \"b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 00:12:21.555279 containerd[1981]: time="2025-11-24T00:12:21.555082790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:12:21.617270 containerd[1981]: time="2025-11-24T00:12:21.617227735Z" level=info msg="Container 029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:12:21.619193 containerd[1981]: time="2025-11-24T00:12:21.619038304Z" level=info msg="Container 0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:12:21.626370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1570463778.mount: Deactivated successfully. Nov 24 00:12:21.626505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293147252.mount: Deactivated successfully. 
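[editor's note] The csi pull above fails with a registry 404 that containerd surfaces as NotFound and kubelet reports as ErrImagePull. A quick way to confirm the failure is independent of kubelet is to repeat the pull directly against containerd in the same k8s.io namespace the shim addresses use. This is a hedged sketch: it assumes the v1 Go client import paths (github.com/containerd/containerd) and the default socket location.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; adjust if the node uses a different address.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	switch {
	case errdefs.IsNotFound(err):
		// Same condition kubelet reports above as ErrImagePull / "not found",
		// assuming the resolver error maps to containerd's NotFound.
		fmt.Printf("%s: tag not found in the registry\n", ref)
	case err != nil:
		log.Fatal(err)
	default:
		fmt.Printf("pulled %s (%s)\n", img.Name(), img.Target().Digest)
	}
}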
Nov 24 00:12:21.647895 containerd[1981]: time="2025-11-24T00:12:21.647731277Z" level=info msg="CreateContainer within sandbox \"b60db3cf13e6420f0f042f21fa883274f6ed6aff4b212719dda24bbccf5e5715\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20\"" Nov 24 00:12:21.649714 containerd[1981]: time="2025-11-24T00:12:21.649678769Z" level=info msg="CreateContainer within sandbox \"16ea60c7db03b2034af65661a135c0703cec4ff376953db09f9eb8f85ad5dad3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9\"" Nov 24 00:12:21.653907 containerd[1981]: time="2025-11-24T00:12:21.653866789Z" level=info msg="StartContainer for \"0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9\"" Nov 24 00:12:21.655389 containerd[1981]: time="2025-11-24T00:12:21.655353759Z" level=info msg="StartContainer for \"029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20\"" Nov 24 00:12:21.656453 containerd[1981]: time="2025-11-24T00:12:21.656418036Z" level=info msg="connecting to shim 029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20" address="unix:///run/containerd/s/fffb8570ea20e30fa3b803471f594de457326442c05df356bebb65f861ef740c" protocol=ttrpc version=3 Nov 24 00:12:21.657042 containerd[1981]: time="2025-11-24T00:12:21.656806065Z" level=info msg="connecting to shim 0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9" address="unix:///run/containerd/s/ef70982c56473c573d06c0cad842aeee230c4b52bb4342c1429d7fb3b8bc7ee8" protocol=ttrpc version=3 Nov 24 00:12:21.720368 systemd[1]: Started cri-containerd-029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20.scope - libcontainer container 029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20. Nov 24 00:12:21.723671 systemd[1]: Started cri-containerd-0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9.scope - libcontainer container 0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9. 
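[editor's note] Here containerd wires each coredns container to its runtime shim over a per-container ttrpc socket ("connecting to shim ... protocol=ttrpc version=3") and systemd tracks each one as a cri-containerd-<id>.scope unit. To cross-check what was actually created and whether it is running, one can enumerate the containers and their task state with the Go client; this is a sketch under the same v1 import-path assumption as the previous example.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The coredns containers started above live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		task, err := c.Task(ctx, nil)
		if err != nil {
			// Container exists but has no running task yet.
			fmt.Printf("%s  <no task>\n", c.ID())
			continue
		}
		st, err := task.Status(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// The IDs printed here should match the cri-containerd-<id>.scope units.
		fmt.Printf("%s  %s (pid %d)\n", c.ID(), st.Status, task.Pid())
	}
}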
Nov 24 00:12:21.849997 containerd[1981]: time="2025-11-24T00:12:21.849817273Z" level=info msg="StartContainer for \"029256e81dbfcb8615b2db5a9bf5bc4c73525c98405c170d5d31a7c123f62f20\" returns successfully" Nov 24 00:12:21.863051 containerd[1981]: time="2025-11-24T00:12:21.862813546Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:21.868916 containerd[1981]: time="2025-11-24T00:12:21.867757299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:12:21.869070 containerd[1981]: time="2025-11-24T00:12:21.867796921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:12:21.869646 kubelet[3599]: E1124 00:12:21.869380 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:12:21.870084 kubelet[3599]: E1124 00:12:21.869662 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:12:21.873365 kubelet[3599]: E1124 00:12:21.870579 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7637e2ea3ae94bb89edf74c1cba02e3f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:21.875633 containerd[1981]: time="2025-11-24T00:12:21.873797100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:12:21.879058 systemd-networkd[1746]: cali360e4b0995f: Gained IPv6LL Nov 24 00:12:21.892626 containerd[1981]: time="2025-11-24T00:12:21.892437578Z" level=info msg="StartContainer for \"0ded23dbd78e3ae649743d98076a60648fc62dce5ea79c03dd0e6b13da99c5f9\" returns successfully" Nov 24 00:12:21.955424 containerd[1981]: time="2025-11-24T00:12:21.955244975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-jnjxv,Uid:abebab1e-f092-4a6b-94e1-1c92a233e08a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"da339c15de07ee937f35e12446ef8cf5633d34e9157f870c3bdaa1e8430d50bf\"" Nov 24 00:12:22.071126 systemd-networkd[1746]: cali9f32bc39f10: Gained IPv6LL Nov 24 00:12:22.204077 containerd[1981]: time="2025-11-24T00:12:22.203998281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:22.206376 containerd[1981]: time="2025-11-24T00:12:22.206226890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:12:22.206548 containerd[1981]: time="2025-11-24T00:12:22.206276001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:12:22.207029 kubelet[3599]: E1124 00:12:22.206980 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:12:22.207134 kubelet[3599]: E1124 00:12:22.207045 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:12:22.207696 kubelet[3599]: E1124 00:12:22.207647 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:22.208494 containerd[1981]: time="2025-11-24T00:12:22.208466559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:12:22.213986 kubelet[3599]: E1124 00:12:22.213912 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:22.275338 kubelet[3599]: E1124 00:12:22.275255 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:22.401160 kubelet[3599]: I1124 00:12:22.393570 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zl56c" podStartSLOduration=52.38994122 podStartE2EDuration="52.38994122s" podCreationTimestamp="2025-11-24 00:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:12:22.362475123 +0000 UTC m=+55.956275924" watchObservedRunningTime="2025-11-24 00:12:22.38994122 +0000 UTC m=+55.983742017" Nov 24 00:12:22.469869 containerd[1981]: time="2025-11-24T00:12:22.469223424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:22.474618 containerd[1981]: time="2025-11-24T00:12:22.474439212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:12:22.475008 containerd[1981]: time="2025-11-24T00:12:22.474444965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:12:22.475586 kubelet[3599]: E1124 00:12:22.475397 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:12:22.475586 kubelet[3599]: E1124 00:12:22.475544 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:12:22.477295 kubelet[3599]: E1124 00:12:22.475949 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:22.477295 kubelet[3599]: E1124 00:12:22.477053 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:12:22.477528 containerd[1981]: time="2025-11-24T00:12:22.476661749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:12:22.519542 systemd-networkd[1746]: cali0729c46fe81: Gained IPv6LL Nov 24 00:12:22.570403 containerd[1981]: time="2025-11-24T00:12:22.569658705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-v9btm,Uid:9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655,Namespace:calico-apiserver,Attempt:0,}" Nov 24 
00:12:22.573283 containerd[1981]: time="2025-11-24T00:12:22.572654238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bb64f948-hbf2v,Uid:02dedcc0-cbf6-46e5-bf8e-d29b3313eb81,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:22.711138 systemd-networkd[1746]: cali531c328ab9c: Gained IPv6LL Nov 24 00:12:22.731364 containerd[1981]: time="2025-11-24T00:12:22.731236809Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:22.734470 containerd[1981]: time="2025-11-24T00:12:22.734160221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:12:22.734806 containerd[1981]: time="2025-11-24T00:12:22.734271860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:22.735676 kubelet[3599]: E1124 00:12:22.735285 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:22.735790 kubelet[3599]: E1124 00:12:22.735701 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:22.736139 kubelet[3599]: E1124 00:12:22.735913 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kngkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-jnjxv_calico-apiserver(abebab1e-f092-4a6b-94e1-1c92a233e08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:22.738492 kubelet[3599]: E1124 00:12:22.738298 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:12:22.881171 systemd-networkd[1746]: cali64a34f28b11: Link UP Nov 24 00:12:22.883725 systemd-networkd[1746]: cali64a34f28b11: Gained carrier Nov 24 00:12:22.909287 kubelet[3599]: I1124 00:12:22.909224 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7cv4s" podStartSLOduration=52.909199585 podStartE2EDuration="52.909199585s" podCreationTimestamp="2025-11-24 00:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 00:12:22.422606734 +0000 UTC m=+56.016407531" watchObservedRunningTime="2025-11-24 00:12:22.909199585 +0000 UTC m=+56.503000382" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.684 [INFO][5315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0 calico-kube-controllers-9bb64f948- calico-system 02dedcc0-cbf6-46e5-bf8e-d29b3313eb81 858 0 2025-11-24 00:11:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9bb64f948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-28 calico-kube-controllers-9bb64f948-hbf2v eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali64a34f28b11 [] [] }} ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" 
WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.685 [INFO][5315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.739 [INFO][5339] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" HandleID="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.740 [INFO][5339] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" HandleID="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5810), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"calico-kube-controllers-9bb64f948-hbf2v", "timestamp":"2025-11-24 00:12:22.739694909 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.740 [INFO][5339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.740 [INFO][5339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.740 [INFO][5339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.754 [INFO][5339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.792 [INFO][5339] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.815 [INFO][5339] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.829 [INFO][5339] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.839 [INFO][5339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.839 [INFO][5339] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.842 [INFO][5339] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089 Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.852 [INFO][5339] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.870 [INFO][5339] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.70/26] block=192.168.120.64/26 handle="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.870 [INFO][5339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.70/26] handle="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" host="ip-172-31-17-28" Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.870 [INFO][5339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
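[editor's note] Each assignment above follows the same fixed sequence in ipam_plugin.go / ipam.go: acquire the host-wide IPAM lock, look up the block affine to the node, claim an address, write the block back, release the lock. The real lock serializes concurrent CNI invocations on the node; in the sketch below an in-process sync.Mutex stands in for it, and the allocator/block types are invented purely to show the critical-section shape.

package main

import (
	"fmt"
	"sync"
)

// block is an invented stand-in for an IPAM block record: the CIDR plus the
// addresses already handed out from it.
type block struct {
	cidr      string
	allocated []string
}

// allocator models the acquire / load-block / claim / write-back / release
// sequence visible in the log lines above.
type allocator struct {
	mu     sync.Mutex // stands in for the host-wide IPAM lock
	blocks map[string]*block
}

func (a *allocator) assign(host, addr string) error {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired ..."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	b, ok := a.blocks[host] // "Trying affinity for ..." / "Attempting to load block"
	if !ok {
		return fmt.Errorf("no block affine to host %s", host)
	}
	b.allocated = append(b.allocated, addr) // "Writing block in order to claim IPs"
	return nil
}

func main() {
	a := &allocator{blocks: map[string]*block{
		"ip-172-31-17-28": {cidr: "192.168.120.64/26"},
	}}
	if err := a.assign("ip-172-31-17-28", "192.168.120.70"); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", a.blocks["ip-172-31-17-28"])
}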
Nov 24 00:12:22.917314 containerd[1981]: 2025-11-24 00:12:22.870 [INFO][5339] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.70/26] IPv6=[] ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" HandleID="k8s-pod-network.c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Workload="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.919783 containerd[1981]: 2025-11-24 00:12:22.876 [INFO][5315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0", GenerateName:"calico-kube-controllers-9bb64f948-", Namespace:"calico-system", SelfLink:"", UID:"02dedcc0-cbf6-46e5-bf8e-d29b3313eb81", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9bb64f948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"calico-kube-controllers-9bb64f948-hbf2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64a34f28b11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:22.919783 containerd[1981]: 2025-11-24 00:12:22.876 [INFO][5315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.70/32] ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.919783 containerd[1981]: 2025-11-24 00:12:22.876 [INFO][5315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64a34f28b11 ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.919783 containerd[1981]: 2025-11-24 00:12:22.888 [INFO][5315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.919783 containerd[1981]: 2025-11-24 
00:12:22.889 [INFO][5315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0", GenerateName:"calico-kube-controllers-9bb64f948-", Namespace:"calico-system", SelfLink:"", UID:"02dedcc0-cbf6-46e5-bf8e-d29b3313eb81", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9bb64f948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089", Pod:"calico-kube-controllers-9bb64f948-hbf2v", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.120.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali64a34f28b11", MAC:"76:2c:b5:ff:6d:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:22.919783 containerd[1981]: 2025-11-24 00:12:22.911 [INFO][5315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" Namespace="calico-system" Pod="calico-kube-controllers-9bb64f948-hbf2v" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--kube--controllers--9bb64f948--hbf2v-eth0" Nov 24 00:12:22.990479 systemd-networkd[1746]: cali61731eef713: Link UP Nov 24 00:12:22.993941 systemd-networkd[1746]: cali61731eef713: Gained carrier Nov 24 00:12:23.028056 containerd[1981]: time="2025-11-24T00:12:23.027942682Z" level=info msg="connecting to shim c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089" address="unix:///run/containerd/s/1d4001bcd066ee0d92252ce2e720d65bcf1c1482967ee5799df27e7e8e9fd9d8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.695 [INFO][5314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0 calico-apiserver-68bdc98bdb- calico-apiserver 9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655 853 0 2025-11-24 00:11:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68bdc98bdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-28 calico-apiserver-68bdc98bdb-v9btm eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali61731eef713 [] [] }} ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.696 [INFO][5314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.820 [INFO][5344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" HandleID="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Workload="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.820 [INFO][5344] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" HandleID="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Workload="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00045a900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-28", "pod":"calico-apiserver-68bdc98bdb-v9btm", "timestamp":"2025-11-24 00:12:22.820041865 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.821 [INFO][5344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.870 [INFO][5344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.871 [INFO][5344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.886 [INFO][5344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.900 [INFO][5344] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.921 [INFO][5344] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.929 [INFO][5344] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.937 [INFO][5344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.939 [INFO][5344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.943 [INFO][5344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4 Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.956 [INFO][5344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.980 [INFO][5344] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.71/26] block=192.168.120.64/26 handle="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.981 [INFO][5344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.71/26] handle="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" host="ip-172-31-17-28" Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.981 [INFO][5344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:23.038717 containerd[1981]: 2025-11-24 00:12:22.981 [INFO][5344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.71/26] IPv6=[] ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" HandleID="k8s-pod-network.245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Workload="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.039700 containerd[1981]: 2025-11-24 00:12:22.985 [INFO][5314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0", GenerateName:"calico-apiserver-68bdc98bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bdc98bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"calico-apiserver-68bdc98bdb-v9btm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61731eef713", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:23.039700 containerd[1981]: 2025-11-24 00:12:22.985 [INFO][5314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.71/32] ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.039700 containerd[1981]: 2025-11-24 00:12:22.985 [INFO][5314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61731eef713 ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.039700 containerd[1981]: 2025-11-24 00:12:22.988 [INFO][5314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.039700 containerd[1981]: 2025-11-24 00:12:22.989 [INFO][5314] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0", GenerateName:"calico-apiserver-68bdc98bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68bdc98bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4", Pod:"calico-apiserver-68bdc98bdb-v9btm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.120.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61731eef713", MAC:"4e:40:dd:a9:f3:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:23.039700 containerd[1981]: 2025-11-24 00:12:23.009 [INFO][5314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" Namespace="calico-apiserver" Pod="calico-apiserver-68bdc98bdb-v9btm" WorkloadEndpoint="ip--172--31--17--28-k8s-calico--apiserver--68bdc98bdb--v9btm-eth0" Nov 24 00:12:23.088318 systemd-networkd[1746]: vxlan.calico: Link UP Nov 24 00:12:23.088328 systemd-networkd[1746]: vxlan.calico: Gained carrier Nov 24 00:12:23.128579 systemd[1]: Started cri-containerd-c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089.scope - libcontainer container c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089. Nov 24 00:12:23.141142 containerd[1981]: time="2025-11-24T00:12:23.141087306Z" level=info msg="connecting to shim 245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4" address="unix:///run/containerd/s/06bcd6f969bfac4857ec9faac349662bcb151b00d6c66cef0f544ade44cc502a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:23.155000 (udev-worker)[4678]: Network interface NamePolicy= disabled on kernel command line. Nov 24 00:12:23.236565 systemd[1]: Started cri-containerd-245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4.scope - libcontainer container 245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4. 
Nov 24 00:12:23.286128 kubelet[3599]: E1124 00:12:23.284203 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:12:23.294870 kubelet[3599]: E1124 00:12:23.293686 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:12:23.347909 containerd[1981]: time="2025-11-24T00:12:23.347513717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9bb64f948-hbf2v,Uid:02dedcc0-cbf6-46e5-bf8e-d29b3313eb81,Namespace:calico-system,Attempt:0,} returns sandbox id \"c407e83de233467b54c30d0e673783d97bfb3690948731cf5f3c83f655585089\"" Nov 24 00:12:23.354773 containerd[1981]: time="2025-11-24T00:12:23.354733453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:12:23.542637 containerd[1981]: time="2025-11-24T00:12:23.542007645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68bdc98bdb-v9btm,Uid:9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"245adf6f7c232b79e9590a9a84acfb29057fd5baf6dd237f776f7e2632a66ba4\"" Nov 24 00:12:23.568091 containerd[1981]: time="2025-11-24T00:12:23.567584740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9xvvr,Uid:9441d7ab-9ca0-4aa4-8c69-0bae216edd81,Namespace:calico-system,Attempt:0,}" Nov 24 00:12:23.614271 containerd[1981]: time="2025-11-24T00:12:23.614198595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:23.616857 containerd[1981]: time="2025-11-24T00:12:23.616743804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:12:23.617069 containerd[1981]: time="2025-11-24T00:12:23.616868743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
Nov 24 00:12:23.617453 kubelet[3599]: E1124 00:12:23.617317 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:12:23.617811 kubelet[3599]: E1124 00:12:23.617640 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:12:23.619020 containerd[1981]: time="2025-11-24T00:12:23.618820571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:12:23.620037 kubelet[3599]: E1124 00:12:23.619710 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvfx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9bb64f948-hbf2v_calico-system(02dedcc0-cbf6-46e5-bf8e-d29b3313eb81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:23.622158 kubelet[3599]: E1124 00:12:23.621940 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:12:23.811673 systemd-networkd[1746]: cali1187d846396: Link UP Nov 24 00:12:23.815146 systemd-networkd[1746]: cali1187d846396: Gained carrier Nov 24 00:12:23.821178 (udev-worker)[5457]: Network interface NamePolicy= disabled on kernel command line. 
Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.652 [INFO][5493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0 goldmane-666569f655- calico-system 9441d7ab-9ca0-4aa4-8c69-0bae216edd81 854 0 2025-11-24 00:11:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-28 goldmane-666569f655-9xvvr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1187d846396 [] [] }} ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.653 [INFO][5493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.714 [INFO][5504] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" HandleID="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Workload="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.715 [INFO][5504] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" HandleID="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Workload="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-28", "pod":"goldmane-666569f655-9xvvr", "timestamp":"2025-11-24 00:12:23.714336793 +0000 UTC"}, Hostname:"ip-172-31-17-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.715 [INFO][5504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.715 [INFO][5504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.715 [INFO][5504] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-28' Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.726 [INFO][5504] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.736 [INFO][5504] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.747 [INFO][5504] ipam/ipam.go 511: Trying affinity for 192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.752 [INFO][5504] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.759 [INFO][5504] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.64/26 host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.759 [INFO][5504] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.120.64/26 handle="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.768 [INFO][5504] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205 Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.785 [INFO][5504] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.120.64/26 handle="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.799 [INFO][5504] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.120.72/26] block=192.168.120.64/26 handle="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.799 [INFO][5504] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.72/26] handle="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" host="ip-172-31-17-28" Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.799 [INFO][5504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 00:12:23.844550 containerd[1981]: 2025-11-24 00:12:23.799 [INFO][5504] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.120.72/26] IPv6=[] ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" HandleID="k8s-pod-network.a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Workload="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.847992 containerd[1981]: 2025-11-24 00:12:23.803 [INFO][5493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9441d7ab-9ca0-4aa4-8c69-0bae216edd81", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"", Pod:"goldmane-666569f655-9xvvr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1187d846396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:23.847992 containerd[1981]: 2025-11-24 00:12:23.803 [INFO][5493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.72/32] ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.847992 containerd[1981]: 2025-11-24 00:12:23.803 [INFO][5493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1187d846396 ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.847992 containerd[1981]: 2025-11-24 00:12:23.815 [INFO][5493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.847992 containerd[1981]: 2025-11-24 00:12:23.818 [INFO][5493] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" 
WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9441d7ab-9ca0-4aa4-8c69-0bae216edd81", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 0, 11, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-28", ContainerID:"a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205", Pod:"goldmane-666569f655-9xvvr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.120.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1187d846396", MAC:"22:8d:ea:0c:c6:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 00:12:23.847992 containerd[1981]: 2025-11-24 00:12:23.837 [INFO][5493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" Namespace="calico-system" Pod="goldmane-666569f655-9xvvr" WorkloadEndpoint="ip--172--31--17--28-k8s-goldmane--666569f655--9xvvr-eth0" Nov 24 00:12:23.876635 containerd[1981]: time="2025-11-24T00:12:23.876105880Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:23.884100 containerd[1981]: time="2025-11-24T00:12:23.884039057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:12:23.884233 containerd[1981]: time="2025-11-24T00:12:23.884171144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:23.884451 kubelet[3599]: E1124 00:12:23.884411 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:23.887872 kubelet[3599]: E1124 00:12:23.884614 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:23.887872 kubelet[3599]: E1124 00:12:23.885033 3599 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ck8tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-v9btm_calico-apiserver(9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:23.887872 kubelet[3599]: E1124 00:12:23.887744 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:12:23.904044 containerd[1981]: time="2025-11-24T00:12:23.903975005Z" level=info msg="connecting to shim a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205" address="unix:///run/containerd/s/549f16e62c0fe825be1c3d1a9b5691ce117df0c51ceeb9d58b2f1c77e25c1f21" namespace=k8s.io protocol=ttrpc version=3 Nov 24 00:12:23.962145 systemd[1]: Started cri-containerd-a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205.scope - libcontainer container a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205. 
Nov 24 00:12:24.119084 systemd-networkd[1746]: vxlan.calico: Gained IPv6LL Nov 24 00:12:24.137624 containerd[1981]: time="2025-11-24T00:12:24.137581137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9xvvr,Uid:9441d7ab-9ca0-4aa4-8c69-0bae216edd81,Namespace:calico-system,Attempt:0,} returns sandbox id \"a62b2989ef9ca637526c7cb18b32dbba02a8c87121c3cd3a4d134b69f0218205\"" Nov 24 00:12:24.140413 containerd[1981]: time="2025-11-24T00:12:24.140262668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:12:24.287390 kubelet[3599]: E1124 00:12:24.287334 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:12:24.312591 kubelet[3599]: E1124 00:12:24.312526 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:12:24.429103 containerd[1981]: time="2025-11-24T00:12:24.429049888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:24.431563 containerd[1981]: time="2025-11-24T00:12:24.431519537Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:12:24.431709 containerd[1981]: time="2025-11-24T00:12:24.431625140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:24.431841 kubelet[3599]: E1124 00:12:24.431797 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:12:24.431841 kubelet[3599]: E1124 00:12:24.431865 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:12:24.433310 kubelet[3599]: E1124 00:12:24.433232 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-285p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9xvvr_calico-system(9441d7ab-9ca0-4aa4-8c69-0bae216edd81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:24.435053 kubelet[3599]: E1124 00:12:24.435013 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:12:24.698259 systemd-networkd[1746]: 
cali64a34f28b11: Gained IPv6LL Nov 24 00:12:24.759125 systemd-networkd[1746]: cali61731eef713: Gained IPv6LL Nov 24 00:12:25.301148 kubelet[3599]: E1124 00:12:25.301014 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:12:25.301148 kubelet[3599]: E1124 00:12:25.301104 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:12:25.305600 kubelet[3599]: E1124 00:12:25.301417 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:12:25.849640 systemd-networkd[1746]: cali1187d846396: Gained IPv6LL Nov 24 00:12:25.921598 systemd[1]: Started sshd@9-172.31.17.28:22-139.178.68.195:56198.service - OpenSSH per-connection server daemon (139.178.68.195:56198). Nov 24 00:12:26.232546 sshd[5617]: Accepted publickey for core from 139.178.68.195 port 56198 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:26.239125 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:26.247417 systemd-logind[1955]: New session 10 of user core. Nov 24 00:12:26.253929 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 00:12:27.655207 sshd[5620]: Connection closed by 139.178.68.195 port 56198 Nov 24 00:12:27.655088 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:27.668298 systemd[1]: sshd@9-172.31.17.28:22-139.178.68.195:56198.service: Deactivated successfully. Nov 24 00:12:27.672554 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 00:12:27.678722 systemd-logind[1955]: Session 10 logged out. Waiting for processes to exit. Nov 24 00:12:27.681228 systemd-logind[1955]: Removed session 10. 
Nov 24 00:12:28.185351 ntpd[2152]: Listen normally on 6 vxlan.calico 192.168.120.64:123 Nov 24 00:12:28.185436 ntpd[2152]: Listen normally on 7 cali60e49e39745 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 6 vxlan.calico 192.168.120.64:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 7 cali60e49e39745 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 8 cali360e4b0995f [fe80::ecee:eeff:feee:eeee%5]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 9 cali0729c46fe81 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 10 cali9f32bc39f10 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 11 cali531c328ab9c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 12 cali64a34f28b11 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 13 cali61731eef713 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 14 vxlan.calico [fe80::6482:21ff:fecf:b241%11]:123 Nov 24 00:12:28.186839 ntpd[2152]: 24 Nov 00:12:28 ntpd[2152]: Listen normally on 15 cali1187d846396 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 24 00:12:28.185469 ntpd[2152]: Listen normally on 8 cali360e4b0995f [fe80::ecee:eeff:feee:eeee%5]:123 Nov 24 00:12:28.185497 ntpd[2152]: Listen normally on 9 cali0729c46fe81 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 24 00:12:28.185524 ntpd[2152]: Listen normally on 10 cali9f32bc39f10 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 24 00:12:28.185551 ntpd[2152]: Listen normally on 11 cali531c328ab9c [fe80::ecee:eeff:feee:eeee%8]:123 Nov 24 00:12:28.185581 ntpd[2152]: Listen normally on 12 cali64a34f28b11 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 24 00:12:28.185610 ntpd[2152]: Listen normally on 13 cali61731eef713 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 24 00:12:28.185638 ntpd[2152]: Listen normally on 14 vxlan.calico [fe80::6482:21ff:fecf:b241%11]:123 Nov 24 00:12:28.185670 ntpd[2152]: Listen normally on 15 cali1187d846396 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 24 00:12:32.700309 systemd[1]: Started sshd@10-172.31.17.28:22-139.178.68.195:55328.service - OpenSSH per-connection server daemon (139.178.68.195:55328). Nov 24 00:12:32.906270 sshd[5650]: Accepted publickey for core from 139.178.68.195 port 55328 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:32.908049 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:32.913669 systemd-logind[1955]: New session 11 of user core. Nov 24 00:12:32.921287 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 00:12:33.143718 sshd[5653]: Connection closed by 139.178.68.195 port 55328 Nov 24 00:12:33.143969 sshd-session[5650]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:33.149604 systemd[1]: sshd@10-172.31.17.28:22-139.178.68.195:55328.service: Deactivated successfully. Nov 24 00:12:33.149815 systemd-logind[1955]: Session 11 logged out. Waiting for processes to exit. Nov 24 00:12:33.152465 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 00:12:33.155334 systemd-logind[1955]: Removed session 11. 
Nov 24 00:12:33.569175 containerd[1981]: time="2025-11-24T00:12:33.568969032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:12:33.871456 containerd[1981]: time="2025-11-24T00:12:33.871204697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:33.873694 containerd[1981]: time="2025-11-24T00:12:33.873563474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:12:33.873694 containerd[1981]: time="2025-11-24T00:12:33.873595079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:12:33.874069 kubelet[3599]: E1124 00:12:33.874019 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:12:33.874527 kubelet[3599]: E1124 00:12:33.874076 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:12:33.874527 kubelet[3599]: E1124 00:12:33.874238 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:33.877124 containerd[1981]: time="2025-11-24T00:12:33.877059376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:12:34.150473 containerd[1981]: time="2025-11-24T00:12:34.150423727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:34.153033 containerd[1981]: time="2025-11-24T00:12:34.152966747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:12:34.153199 containerd[1981]: time="2025-11-24T00:12:34.152970657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:12:34.153409 kubelet[3599]: E1124 00:12:34.153304 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:12:34.153409 kubelet[3599]: E1124 00:12:34.153369 3599 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:12:34.153577 kubelet[3599]: E1124 00:12:34.153518 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:34.154961 kubelet[3599]: E1124 00:12:34.154894 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:34.571366 containerd[1981]: time="2025-11-24T00:12:34.570363271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:12:34.829765 containerd[1981]: time="2025-11-24T00:12:34.829538635Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:34.831784 containerd[1981]: time="2025-11-24T00:12:34.831702714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:12:34.831943 containerd[1981]: time="2025-11-24T00:12:34.831800482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:34.832027 kubelet[3599]: E1124 00:12:34.831966 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:34.832027 kubelet[3599]: E1124 00:12:34.832021 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:34.832427 kubelet[3599]: E1124 00:12:34.832332 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kngkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-jnjxv_calico-apiserver(abebab1e-f092-4a6b-94e1-1c92a233e08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:34.834172 kubelet[3599]: E1124 00:12:34.834114 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:12:36.571831 containerd[1981]: time="2025-11-24T00:12:36.571694723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:12:36.832425 containerd[1981]: time="2025-11-24T00:12:36.832289909Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:36.834645 containerd[1981]: time="2025-11-24T00:12:36.834563168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:12:36.834645 containerd[1981]: time="2025-11-24T00:12:36.834615869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:12:36.834930 kubelet[3599]: E1124 00:12:36.834835 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:12:36.839998 kubelet[3599]: E1124 00:12:36.834941 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:12:36.839998 kubelet[3599]: E1124 00:12:36.835089 3599 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7637e2ea3ae94bb89edf74c1cba02e3f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:36.840917 containerd[1981]: time="2025-11-24T00:12:36.840806581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:12:37.116310 containerd[1981]: time="2025-11-24T00:12:37.116064047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:37.118768 containerd[1981]: time="2025-11-24T00:12:37.118613941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:12:37.118768 containerd[1981]: time="2025-11-24T00:12:37.118728366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:12:37.118978 kubelet[3599]: E1124 00:12:37.118926 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:12:37.118978 kubelet[3599]: E1124 00:12:37.118972 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:12:37.119117 kubelet[3599]: E1124 00:12:37.119079 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:37.120662 kubelet[3599]: E1124 00:12:37.120585 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:12:37.570331 containerd[1981]: time="2025-11-24T00:12:37.570189156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:12:37.841021 containerd[1981]: 
time="2025-11-24T00:12:37.839963302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:37.842972 containerd[1981]: time="2025-11-24T00:12:37.842899441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:12:37.843195 containerd[1981]: time="2025-11-24T00:12:37.842962325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:37.843652 kubelet[3599]: E1124 00:12:37.843374 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:37.844444 kubelet[3599]: E1124 00:12:37.843681 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:37.844444 kubelet[3599]: E1124 00:12:37.843884 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ck8tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-v9btm_calico-apiserver(9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:37.845263 kubelet[3599]: E1124 00:12:37.845213 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:12:38.179381 systemd[1]: Started sshd@11-172.31.17.28:22-139.178.68.195:55344.service - OpenSSH per-connection server daemon (139.178.68.195:55344). Nov 24 00:12:38.370694 sshd[5674]: Accepted publickey for core from 139.178.68.195 port 55344 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:38.372944 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:38.380076 systemd-logind[1955]: New session 12 of user core. Nov 24 00:12:38.387135 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 00:12:38.571079 containerd[1981]: time="2025-11-24T00:12:38.570645973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:12:38.630110 sshd[5677]: Connection closed by 139.178.68.195 port 55344 Nov 24 00:12:38.631455 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:38.635614 systemd[1]: sshd@11-172.31.17.28:22-139.178.68.195:55344.service: Deactivated successfully. Nov 24 00:12:38.638598 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 00:12:38.641758 systemd-logind[1955]: Session 12 logged out. Waiting for processes to exit. Nov 24 00:12:38.643594 systemd-logind[1955]: Removed session 12. Nov 24 00:12:38.665713 systemd[1]: Started sshd@12-172.31.17.28:22-139.178.68.195:55358.service - OpenSSH per-connection server daemon (139.178.68.195:55358). 
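The containerd entries above show ghcr.io answering 404 Not Found for the v3.30.4 Calico tags, so every pull ends in ErrImagePull and the kubelet keeps rescheduling the container starts. A minimal sketch for checking one of those tags directly against the registry's OCI distribution API, assuming ghcr.io's anonymous token flow applies to this repository and that curl and jq are available; the repository path and tag are copied from the errors in the log:

# Repository path and tag as they appear in the pull errors above.
IMG=flatcar/calico/apiserver
TAG=v3.30.4
# ghcr.io follows the Docker registry token-auth flow; an anonymous pull token is assumed to be enough here.
TOKEN=$(curl -s "https://ghcr.io/token?scope=repository:${IMG}:pull" | jq -r .token)
# A 404 from the manifest endpoint matches the containerd errors; a 200 would mean the tag exists.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json" \
  "https://ghcr.io/v2/${IMG}/manifests/${TAG}"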
Nov 24 00:12:38.812666 containerd[1981]: time="2025-11-24T00:12:38.812603840Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:38.815353 containerd[1981]: time="2025-11-24T00:12:38.815259574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:12:38.815794 containerd[1981]: time="2025-11-24T00:12:38.815279596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:12:38.816071 kubelet[3599]: E1124 00:12:38.815932 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:12:38.816071 kubelet[3599]: E1124 00:12:38.816059 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:12:38.817158 kubelet[3599]: E1124 00:12:38.816628 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvfx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9bb64f948-hbf2v_calico-system(02dedcc0-cbf6-46e5-bf8e-d29b3313eb81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:38.817917 kubelet[3599]: E1124 00:12:38.817836 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:12:38.902890 sshd[5690]: Accepted publickey for core from 139.178.68.195 port 55358 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:38.905185 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:38.911683 systemd-logind[1955]: New session 13 of user core. Nov 24 00:12:38.919127 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 00:12:39.203135 sshd[5693]: Connection closed by 139.178.68.195 port 55358 Nov 24 00:12:39.206271 sshd-session[5690]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:39.219439 systemd-logind[1955]: Session 13 logged out. Waiting for processes to exit. Nov 24 00:12:39.220976 systemd[1]: sshd@12-172.31.17.28:22-139.178.68.195:55358.service: Deactivated successfully. Nov 24 00:12:39.227265 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 00:12:39.244983 systemd-logind[1955]: Removed session 13. Nov 24 00:12:39.247434 systemd[1]: Started sshd@13-172.31.17.28:22-139.178.68.195:55368.service - OpenSSH per-connection server daemon (139.178.68.195:55368). Nov 24 00:12:39.482964 sshd[5702]: Accepted publickey for core from 139.178.68.195 port 55368 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:39.488650 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:39.505942 systemd-logind[1955]: New session 14 of user core. Nov 24 00:12:39.511471 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 24 00:12:39.794567 sshd[5705]: Connection closed by 139.178.68.195 port 55368 Nov 24 00:12:39.795059 sshd-session[5702]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:39.801778 systemd[1]: sshd@13-172.31.17.28:22-139.178.68.195:55368.service: Deactivated successfully. Nov 24 00:12:39.805126 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 00:12:39.806783 systemd-logind[1955]: Session 14 logged out. Waiting for processes to exit. Nov 24 00:12:39.808742 systemd-logind[1955]: Removed session 14. Nov 24 00:12:40.570869 containerd[1981]: time="2025-11-24T00:12:40.570727528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:12:40.826832 containerd[1981]: time="2025-11-24T00:12:40.826686518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:40.830140 containerd[1981]: time="2025-11-24T00:12:40.830046298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:12:40.831551 containerd[1981]: time="2025-11-24T00:12:40.830061841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:40.831751 kubelet[3599]: E1124 00:12:40.830415 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:12:40.831751 kubelet[3599]: E1124 00:12:40.830469 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:12:40.832964 kubelet[3599]: E1124 00:12:40.830713 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-285p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9xvvr_calico-system(9441d7ab-9ca0-4aa4-8c69-0bae216edd81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:40.835933 kubelet[3599]: E1124 00:12:40.835881 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:12:44.829430 systemd[1]: Started 
sshd@14-172.31.17.28:22-139.178.68.195:49258.service - OpenSSH per-connection server daemon (139.178.68.195:49258). Nov 24 00:12:45.007703 sshd[5730]: Accepted publickey for core from 139.178.68.195 port 49258 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:45.010325 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:45.019686 systemd-logind[1955]: New session 15 of user core. Nov 24 00:12:45.025449 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 00:12:45.370433 sshd[5733]: Connection closed by 139.178.68.195 port 49258 Nov 24 00:12:45.371177 sshd-session[5730]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:45.375748 systemd[1]: sshd@14-172.31.17.28:22-139.178.68.195:49258.service: Deactivated successfully. Nov 24 00:12:45.379192 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 00:12:45.382072 systemd-logind[1955]: Session 15 logged out. Waiting for processes to exit. Nov 24 00:12:45.383784 systemd-logind[1955]: Removed session 15. Nov 24 00:12:46.570961 kubelet[3599]: E1124 00:12:46.570885 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:12:47.569446 kubelet[3599]: E1124 00:12:47.569368 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:12:50.412826 systemd[1]: Started sshd@15-172.31.17.28:22-139.178.68.195:39544.service - OpenSSH per-connection server daemon (139.178.68.195:39544). 
Nov 24 00:12:50.575807 kubelet[3599]: E1124 00:12:50.575628 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:12:50.646239 sshd[5745]: Accepted publickey for core from 139.178.68.195 port 39544 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:50.649708 sshd-session[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:50.656980 systemd-logind[1955]: New session 16 of user core. Nov 24 00:12:50.664972 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 00:12:50.927005 sshd[5748]: Connection closed by 139.178.68.195 port 39544 Nov 24 00:12:50.928297 sshd-session[5745]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:50.934271 systemd[1]: sshd@15-172.31.17.28:22-139.178.68.195:39544.service: Deactivated successfully. Nov 24 00:12:50.937439 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 00:12:50.939224 systemd-logind[1955]: Session 16 logged out. Waiting for processes to exit. Nov 24 00:12:50.942064 systemd-logind[1955]: Removed session 16. Nov 24 00:12:51.569366 kubelet[3599]: E1124 00:12:51.569161 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:12:51.570044 kubelet[3599]: E1124 00:12:51.569426 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:12:55.964239 systemd[1]: Started sshd@16-172.31.17.28:22-139.178.68.195:39554.service - OpenSSH per-connection server daemon (139.178.68.195:39554). 
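At this point the kubelet entries have shifted from ErrImagePull to ImagePullBackOff, i.e. it is rate-limiting retries rather than failing a fresh pull every sync. A short sketch, assuming cluster-admin access with kubectl from the node or a workstation, for listing the pods named in the back-off messages and reading the events behind them:

# Pods named in the "Error syncing pod" back-off messages above.
kubectl get pods -n calico-system whisker-67965d874b-g8xwp csi-node-driver-l5ntz calico-kube-controllers-9bb64f948-hbf2v goldmane-666569f655-9xvvr
kubectl get pods -n calico-apiserver calico-apiserver-68bdc98bdb-jnjxv calico-apiserver-68bdc98bdb-v9btm
# The Events section repeats the same NotFound / Back-off reasons seen in this journal.
kubectl describe pod -n calico-system whisker-67965d874b-g8xwp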
Nov 24 00:12:56.310531 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 39554 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:56.316883 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:56.327411 systemd-logind[1955]: New session 17 of user core. Nov 24 00:12:56.333153 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 00:12:56.577613 kubelet[3599]: E1124 00:12:56.577242 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:12:56.729074 sshd[5788]: Connection closed by 139.178.68.195 port 39554 Nov 24 00:12:56.729927 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:56.735569 systemd[1]: sshd@16-172.31.17.28:22-139.178.68.195:39554.service: Deactivated successfully. Nov 24 00:12:56.738582 systemd[1]: session-17.scope: Deactivated successfully. Nov 24 00:12:56.739621 systemd-logind[1955]: Session 17 logged out. Waiting for processes to exit. Nov 24 00:12:56.742829 systemd-logind[1955]: Removed session 17. Nov 24 00:12:56.767722 systemd[1]: Started sshd@17-172.31.17.28:22-139.178.68.195:39570.service - OpenSSH per-connection server daemon (139.178.68.195:39570). Nov 24 00:12:56.987824 sshd[5800]: Accepted publickey for core from 139.178.68.195 port 39570 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:56.990058 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:56.996379 systemd-logind[1955]: New session 18 of user core. Nov 24 00:12:57.006154 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 00:12:57.762054 sshd[5803]: Connection closed by 139.178.68.195 port 39570 Nov 24 00:12:57.763604 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:57.769288 systemd[1]: sshd@17-172.31.17.28:22-139.178.68.195:39570.service: Deactivated successfully. Nov 24 00:12:57.772510 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 00:12:57.773737 systemd-logind[1955]: Session 18 logged out. Waiting for processes to exit. Nov 24 00:12:57.775916 systemd-logind[1955]: Removed session 18. Nov 24 00:12:57.798511 systemd[1]: Started sshd@18-172.31.17.28:22-139.178.68.195:39578.service - OpenSSH per-connection server daemon (139.178.68.195:39578). Nov 24 00:12:58.044046 sshd[5813]: Accepted publickey for core from 139.178.68.195 port 39578 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:58.051662 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:58.071893 systemd-logind[1955]: New session 19 of user core. Nov 24 00:12:58.078131 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 24 00:12:58.579676 containerd[1981]: time="2025-11-24T00:12:58.577691572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:12:58.869773 containerd[1981]: time="2025-11-24T00:12:58.869620388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:58.872356 containerd[1981]: time="2025-11-24T00:12:58.872020918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:12:58.872559 containerd[1981]: time="2025-11-24T00:12:58.872418129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:12:58.872699 kubelet[3599]: E1124 00:12:58.872621 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:12:58.872699 kubelet[3599]: E1124 00:12:58.872676 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:12:58.874465 containerd[1981]: time="2025-11-24T00:12:58.874422517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:12:58.876464 kubelet[3599]: E1124 00:12:58.876124 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:59.146202 sshd[5816]: Connection closed by 139.178.68.195 port 39578 Nov 24 00:12:59.147837 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Nov 24 00:12:59.158020 systemd-logind[1955]: Session 19 logged out. Waiting for processes to exit. Nov 24 00:12:59.159741 systemd[1]: sshd@18-172.31.17.28:22-139.178.68.195:39578.service: Deactivated successfully. Nov 24 00:12:59.164814 systemd[1]: session-19.scope: Deactivated successfully. 
Nov 24 00:12:59.172872 containerd[1981]: time="2025-11-24T00:12:59.171693709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:59.174067 containerd[1981]: time="2025-11-24T00:12:59.173999516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:12:59.174067 containerd[1981]: time="2025-11-24T00:12:59.174037386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:12:59.174416 kubelet[3599]: E1124 00:12:59.174377 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:59.174509 kubelet[3599]: E1124 00:12:59.174428 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:12:59.174716 kubelet[3599]: E1124 00:12:59.174665 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kngkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-jnjxv_calico-apiserver(abebab1e-f092-4a6b-94e1-1c92a233e08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:59.175312 containerd[1981]: time="2025-11-24T00:12:59.175281423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:12:59.176796 kubelet[3599]: E1124 00:12:59.176744 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:12:59.189492 systemd-logind[1955]: Removed session 19. Nov 24 00:12:59.192178 systemd[1]: Started sshd@19-172.31.17.28:22-139.178.68.195:39586.service - OpenSSH per-connection server daemon (139.178.68.195:39586). Nov 24 00:12:59.402821 sshd[5835]: Accepted publickey for core from 139.178.68.195 port 39586 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:12:59.406314 sshd-session[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:12:59.416540 systemd-logind[1955]: New session 20 of user core. Nov 24 00:12:59.431090 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 24 00:12:59.433701 containerd[1981]: time="2025-11-24T00:12:59.433630896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:12:59.437044 containerd[1981]: time="2025-11-24T00:12:59.436936312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:12:59.437385 containerd[1981]: time="2025-11-24T00:12:59.437143235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:12:59.437960 kubelet[3599]: E1124 00:12:59.437783 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:12:59.438146 kubelet[3599]: E1124 00:12:59.438112 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:12:59.438533 kubelet[3599]: E1124 00:12:59.438473 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:12:59.441575 kubelet[3599]: E1124 00:12:59.441503 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:13:00.372469 sshd[5838]: Connection closed by 139.178.68.195 port 39586 Nov 24 00:13:00.375481 sshd-session[5835]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:00.383656 systemd-logind[1955]: Session 20 logged out. Waiting for processes to exit. Nov 24 00:13:00.385349 systemd[1]: sshd@19-172.31.17.28:22-139.178.68.195:39586.service: Deactivated successfully. Nov 24 00:13:00.392502 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 00:13:00.415725 systemd-logind[1955]: Removed session 20. 
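Every Calico component image (apiserver, whisker, whisker-backend, kube-controllers, goldmane, csi, node-driver-registrar) fails identically for the same v3.30.4 tag, which points at the tags not being published under ghcr.io/flatcar/calico rather than at a node-side pull problem. A sketch for confirming that from the node itself, assuming crictl is installed and configured against this containerd:

# Compare what is already cached on the node against the tag the kubelet is asking for.
crictl images | grep calico
# Re-run one pull by hand; the same "not found" answer from the registry rules out a kubelet-side cause.
crictl pull ghcr.io/flatcar/calico/csi:v3.30.4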
Nov 24 00:13:00.417769 systemd[1]: Started sshd@20-172.31.17.28:22-139.178.68.195:45056.service - OpenSSH per-connection server daemon (139.178.68.195:45056). Nov 24 00:13:00.608026 sshd[5848]: Accepted publickey for core from 139.178.68.195 port 45056 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:00.610179 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:00.618752 systemd-logind[1955]: New session 21 of user core. Nov 24 00:13:00.633074 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 00:13:00.970435 sshd[5851]: Connection closed by 139.178.68.195 port 45056 Nov 24 00:13:00.972598 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:00.982622 systemd[1]: sshd@20-172.31.17.28:22-139.178.68.195:45056.service: Deactivated successfully. Nov 24 00:13:00.985693 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 00:13:00.989118 systemd-logind[1955]: Session 21 logged out. Waiting for processes to exit. Nov 24 00:13:00.991191 systemd-logind[1955]: Removed session 21. Nov 24 00:13:04.579867 containerd[1981]: time="2025-11-24T00:13:04.577805749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:13:04.881226 containerd[1981]: time="2025-11-24T00:13:04.881058464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:04.884667 containerd[1981]: time="2025-11-24T00:13:04.883980337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:13:04.884667 containerd[1981]: time="2025-11-24T00:13:04.884569163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:13:04.884921 kubelet[3599]: E1124 00:13:04.884812 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:13:04.884921 kubelet[3599]: E1124 00:13:04.884896 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:13:04.885384 kubelet[3599]: E1124 00:13:04.885030 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7637e2ea3ae94bb89edf74c1cba02e3f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:04.889827 containerd[1981]: time="2025-11-24T00:13:04.889384596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:13:05.140943 containerd[1981]: time="2025-11-24T00:13:05.140487047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:05.143265 containerd[1981]: time="2025-11-24T00:13:05.143174091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:13:05.143467 containerd[1981]: time="2025-11-24T00:13:05.143185059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:13:05.143566 kubelet[3599]: E1124 00:13:05.143496 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:13:05.143631 kubelet[3599]: E1124 00:13:05.143575 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:13:05.143857 kubelet[3599]: E1124 00:13:05.143776 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:05.145220 kubelet[3599]: E1124 00:13:05.145167 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:13:05.569685 containerd[1981]: time="2025-11-24T00:13:05.569281650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:13:05.854986 containerd[1981]: time="2025-11-24T00:13:05.854836603Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Nov 24 00:13:05.857629 containerd[1981]: time="2025-11-24T00:13:05.857569849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:13:05.857797 containerd[1981]: time="2025-11-24T00:13:05.857604983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:13:05.858901 kubelet[3599]: E1124 00:13:05.857913 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:13:05.858901 kubelet[3599]: E1124 00:13:05.858002 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:13:05.859220 kubelet[3599]: E1124 00:13:05.859150 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvfx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9bb64f948-hbf2v_calico-system(02dedcc0-cbf6-46e5-bf8e-d29b3313eb81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:05.859916 containerd[1981]: time="2025-11-24T00:13:05.859885537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:13:05.860962 kubelet[3599]: E1124 00:13:05.860916 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:13:06.007800 systemd[1]: Started sshd@21-172.31.17.28:22-139.178.68.195:45062.service - OpenSSH per-connection server daemon (139.178.68.195:45062). 
Nov 24 00:13:06.139957 containerd[1981]: time="2025-11-24T00:13:06.139737252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:06.168355 containerd[1981]: time="2025-11-24T00:13:06.153414240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:13:06.168535 containerd[1981]: time="2025-11-24T00:13:06.168287352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:13:06.169176 kubelet[3599]: E1124 00:13:06.169130 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:13:06.170284 kubelet[3599]: E1124 00:13:06.169654 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:13:06.170284 kubelet[3599]: E1124 00:13:06.169822 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ck8tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-v9btm_calico-apiserver(9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:06.171793 kubelet[3599]: E1124 00:13:06.171691 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:13:06.259575 sshd[5875]: Accepted publickey for core from 139.178.68.195 port 45062 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:06.262130 sshd-session[5875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:06.268798 systemd-logind[1955]: New session 22 of user core. Nov 24 00:13:06.275429 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 24 00:13:06.807602 sshd[5878]: Connection closed by 139.178.68.195 port 45062 Nov 24 00:13:06.810074 sshd-session[5875]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:06.836825 systemd[1]: sshd@21-172.31.17.28:22-139.178.68.195:45062.service: Deactivated successfully. Nov 24 00:13:06.852474 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 00:13:06.857939 systemd-logind[1955]: Session 22 logged out. Waiting for processes to exit. Nov 24 00:13:06.862715 systemd-logind[1955]: Removed session 22. 
Nov 24 00:13:10.569755 kubelet[3599]: E1124 00:13:10.569605 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:13:10.573175 containerd[1981]: time="2025-11-24T00:13:10.570534392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:13:10.856111 containerd[1981]: time="2025-11-24T00:13:10.855674687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:10.858472 containerd[1981]: time="2025-11-24T00:13:10.858406217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:13:10.858608 containerd[1981]: time="2025-11-24T00:13:10.858508095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:13:10.858763 kubelet[3599]: E1124 00:13:10.858719 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:13:10.858892 kubelet[3599]: E1124 00:13:10.858782 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:13:10.859057 kubelet[3599]: E1124 00:13:10.858977 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-285p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9xvvr_calico-system(9441d7ab-9ca0-4aa4-8c69-0bae216edd81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:10.862397 kubelet[3599]: E1124 00:13:10.860584 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:13:11.848036 systemd[1]: Started 
sshd@22-172.31.17.28:22-139.178.68.195:45226.service - OpenSSH per-connection server daemon (139.178.68.195:45226). Nov 24 00:13:12.041047 sshd[5890]: Accepted publickey for core from 139.178.68.195 port 45226 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:12.043072 sshd-session[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:12.050916 systemd-logind[1955]: New session 23 of user core. Nov 24 00:13:12.058112 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 00:13:12.258166 sshd[5893]: Connection closed by 139.178.68.195 port 45226 Nov 24 00:13:12.260570 sshd-session[5890]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:12.267748 systemd-logind[1955]: Session 23 logged out. Waiting for processes to exit. Nov 24 00:13:12.268521 systemd[1]: sshd@22-172.31.17.28:22-139.178.68.195:45226.service: Deactivated successfully. Nov 24 00:13:12.273025 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 00:13:12.276461 systemd-logind[1955]: Removed session 23. Nov 24 00:13:13.573869 kubelet[3599]: E1124 00:13:13.573801 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:13:17.314194 systemd[1]: Started sshd@23-172.31.17.28:22-139.178.68.195:45236.service - OpenSSH per-connection server daemon (139.178.68.195:45236). Nov 24 00:13:17.510254 sshd[5905]: Accepted publickey for core from 139.178.68.195 port 45236 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:17.513856 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:17.521230 systemd-logind[1955]: New session 24 of user core. Nov 24 00:13:17.529121 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 24 00:13:17.574281 kubelet[3599]: E1124 00:13:17.573991 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:13:17.578163 kubelet[3599]: E1124 00:13:17.578113 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:13:17.774264 sshd[5908]: Connection closed by 139.178.68.195 port 45236 Nov 24 00:13:17.776551 sshd-session[5905]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:17.782107 systemd[1]: sshd@23-172.31.17.28:22-139.178.68.195:45236.service: Deactivated successfully. Nov 24 00:13:17.785810 systemd[1]: session-24.scope: Deactivated successfully. Nov 24 00:13:17.789038 systemd-logind[1955]: Session 24 logged out. Waiting for processes to exit. Nov 24 00:13:17.791535 systemd-logind[1955]: Removed session 24. Nov 24 00:13:19.570545 kubelet[3599]: E1124 00:13:19.570496 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:13:22.827807 systemd[1]: Started sshd@24-172.31.17.28:22-139.178.68.195:42998.service - OpenSSH per-connection server daemon (139.178.68.195:42998). Nov 24 00:13:23.116499 sshd[5945]: Accepted publickey for core from 139.178.68.195 port 42998 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:23.120357 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:23.128294 systemd-logind[1955]: New session 25 of user core. Nov 24 00:13:23.136184 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 24 00:13:23.526984 sshd[5949]: Connection closed by 139.178.68.195 port 42998 Nov 24 00:13:23.529130 sshd-session[5945]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:23.543667 systemd[1]: sshd@24-172.31.17.28:22-139.178.68.195:42998.service: Deactivated successfully. Nov 24 00:13:23.555680 systemd[1]: session-25.scope: Deactivated successfully. Nov 24 00:13:23.560265 systemd-logind[1955]: Session 25 logged out. Waiting for processes to exit. Nov 24 00:13:23.562670 systemd-logind[1955]: Removed session 25. Nov 24 00:13:23.570476 kubelet[3599]: E1124 00:13:23.570427 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:13:24.572459 kubelet[3599]: E1124 00:13:24.572381 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:13:25.569692 kubelet[3599]: E1124 00:13:25.569588 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:13:28.580621 systemd[1]: Started sshd@25-172.31.17.28:22-139.178.68.195:43000.service - OpenSSH per-connection server daemon (139.178.68.195:43000). Nov 24 00:13:28.783011 sshd[5964]: Accepted publickey for core from 139.178.68.195 port 43000 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:28.785632 sshd-session[5964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:28.795120 systemd-logind[1955]: New session 26 of user core. Nov 24 00:13:28.801138 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 24 00:13:29.044871 sshd[5967]: Connection closed by 139.178.68.195 port 43000 Nov 24 00:13:29.046076 sshd-session[5964]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:29.052084 systemd[1]: sshd@25-172.31.17.28:22-139.178.68.195:43000.service: Deactivated successfully. Nov 24 00:13:29.056072 systemd[1]: session-26.scope: Deactivated successfully. Nov 24 00:13:29.059116 systemd-logind[1955]: Session 26 logged out. Waiting for processes to exit. Nov 24 00:13:29.062757 systemd-logind[1955]: Removed session 26. Nov 24 00:13:29.569383 kubelet[3599]: E1124 00:13:29.569340 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:13:30.575233 kubelet[3599]: E1124 00:13:30.575109 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:13:32.571282 kubelet[3599]: E1124 00:13:32.571221 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:13:34.084403 systemd[1]: Started sshd@26-172.31.17.28:22-139.178.68.195:45138.service - OpenSSH per-connection server daemon (139.178.68.195:45138). Nov 24 00:13:34.284356 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 45138 ssh2: RSA SHA256:Pp7uWNgkT6o/c2/MqDcUdGGYmK/xCuy/eKvi/2IGUvk Nov 24 00:13:34.287699 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 00:13:34.296221 systemd-logind[1955]: New session 27 of user core. Nov 24 00:13:34.304105 systemd[1]: Started session-27.scope - Session 27 of User core. 
Nov 24 00:13:34.578377 sshd[5985]: Connection closed by 139.178.68.195 port 45138 Nov 24 00:13:34.579165 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Nov 24 00:13:34.586166 systemd-logind[1955]: Session 27 logged out. Waiting for processes to exit. Nov 24 00:13:34.586699 systemd[1]: sshd@26-172.31.17.28:22-139.178.68.195:45138.service: Deactivated successfully. Nov 24 00:13:34.591225 systemd[1]: session-27.scope: Deactivated successfully. Nov 24 00:13:34.594900 systemd-logind[1955]: Removed session 27. Nov 24 00:13:35.570375 kubelet[3599]: E1124 00:13:35.570312 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:13:37.569414 kubelet[3599]: E1124 00:13:37.569320 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:13:40.571076 containerd[1981]: time="2025-11-24T00:13:40.570908033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:13:40.845666 containerd[1981]: time="2025-11-24T00:13:40.845460931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:40.848712 containerd[1981]: time="2025-11-24T00:13:40.848346666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:13:40.849256 containerd[1981]: time="2025-11-24T00:13:40.848349626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:13:40.849373 kubelet[3599]: E1124 00:13:40.849289 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:13:40.849373 
kubelet[3599]: E1124 00:13:40.849350 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:13:40.851746 kubelet[3599]: E1124 00:13:40.849528 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kngkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-jnjxv_calico-apiserver(abebab1e-f092-4a6b-94e1-1c92a233e08a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:40.851746 kubelet[3599]: E1124 00:13:40.851020 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:13:41.571586 kubelet[3599]: E1124 00:13:41.571417 3599 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:13:44.569677 kubelet[3599]: E1124 00:13:44.569505 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:13:46.568673 containerd[1981]: time="2025-11-24T00:13:46.568636395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 00:13:46.842801 containerd[1981]: time="2025-11-24T00:13:46.842669145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:46.844751 containerd[1981]: time="2025-11-24T00:13:46.844680937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 00:13:46.844969 containerd[1981]: time="2025-11-24T00:13:46.844776916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 00:13:46.845009 kubelet[3599]: E1124 00:13:46.844940 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:13:46.845009 kubelet[3599]: E1124 00:13:46.844986 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 00:13:46.846538 kubelet[3599]: E1124 00:13:46.845113 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvfx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-9bb64f948-hbf2v_calico-system(02dedcc0-cbf6-46e5-bf8e-d29b3313eb81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:46.847938 kubelet[3599]: E1124 00:13:46.847897 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:13:48.458892 systemd[1]: 
cri-containerd-ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07.scope: Deactivated successfully. Nov 24 00:13:48.459733 systemd[1]: cri-containerd-ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07.scope: Consumed 11.707s CPU time, 108M memory peak, 40.2M read from disk. Nov 24 00:13:48.490400 containerd[1981]: time="2025-11-24T00:13:48.490341492Z" level=info msg="received container exit event container_id:\"ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07\" id:\"ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07\" pid:3971 exit_status:1 exited_at:{seconds:1763943228 nanos:458129074}" Nov 24 00:13:48.528626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07-rootfs.mount: Deactivated successfully. Nov 24 00:13:48.797430 kubelet[3599]: I1124 00:13:48.797315 3599 scope.go:117] "RemoveContainer" containerID="2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700" Nov 24 00:13:48.797899 kubelet[3599]: I1124 00:13:48.797629 3599 scope.go:117] "RemoveContainer" containerID="ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07" Nov 24 00:13:48.798032 kubelet[3599]: E1124 00:13:48.797971 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-rjkjn_tigera-operator(f05c8e2d-f785-4712-a1e5-8f5d640174db)\"" pod="tigera-operator/tigera-operator-7dcd859c48-rjkjn" podUID="f05c8e2d-f785-4712-a1e5-8f5d640174db" Nov 24 00:13:48.945258 containerd[1981]: time="2025-11-24T00:13:48.945080645Z" level=info msg="RemoveContainer for \"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\"" Nov 24 00:13:49.013483 containerd[1981]: time="2025-11-24T00:13:49.013440222Z" level=info msg="RemoveContainer for \"2b9aad76b9906315ac663cb372433dd13d2a0152b5eb3a3962c08586a32f0700\" returns successfully" Nov 24 00:13:49.417056 kubelet[3599]: E1124 00:13:49.416979 3599 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 00:13:49.567809 containerd[1981]: time="2025-11-24T00:13:49.567479993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 00:13:49.812733 containerd[1981]: time="2025-11-24T00:13:49.812588747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:49.814952 containerd[1981]: time="2025-11-24T00:13:49.814776440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 00:13:49.814952 containerd[1981]: time="2025-11-24T00:13:49.814951997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 00:13:49.815173 kubelet[3599]: E1124 00:13:49.815119 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:13:49.815173 kubelet[3599]: E1124 00:13:49.815159 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 00:13:49.815477 kubelet[3599]: E1124 00:13:49.815275 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:49.817496 containerd[1981]: time="2025-11-24T00:13:49.817460996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 00:13:50.055253 containerd[1981]: time="2025-11-24T00:13:50.055097549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:50.057385 containerd[1981]: time="2025-11-24T00:13:50.057261074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 00:13:50.057385 containerd[1981]: 
time="2025-11-24T00:13:50.057351356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 00:13:50.057633 kubelet[3599]: E1124 00:13:50.057521 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:13:50.057633 kubelet[3599]: E1124 00:13:50.057577 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 00:13:50.057828 kubelet[3599]: E1124 00:13:50.057741 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7fqqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l5ntz_calico-system(32cb229b-909c-49d5-aa91-1c2bceaac746): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:50.059036 kubelet[3599]: E1124 00:13:50.058990 3599 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:13:50.160048 systemd[1]: cri-containerd-4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287.scope: Deactivated successfully. Nov 24 00:13:50.161000 systemd[1]: cri-containerd-4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287.scope: Consumed 5.364s CPU time, 88M memory peak, 64.1M read from disk. Nov 24 00:13:50.162250 containerd[1981]: time="2025-11-24T00:13:50.162216723Z" level=info msg="received container exit event container_id:\"4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287\" id:\"4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287\" pid:3147 exit_status:1 exited_at:{seconds:1763943230 nanos:161609121}" Nov 24 00:13:50.193166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287-rootfs.mount: Deactivated successfully. Nov 24 00:13:50.793492 kubelet[3599]: I1124 00:13:50.793456 3599 scope.go:117] "RemoveContainer" containerID="4cce6dab42590a09ae726ea07126f06dcbe525b5c1074a900e3b01e6a9ddc287" Nov 24 00:13:50.795775 containerd[1981]: time="2025-11-24T00:13:50.795730348Z" level=info msg="CreateContainer within sandbox \"bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 24 00:13:50.872312 containerd[1981]: time="2025-11-24T00:13:50.870744939Z" level=info msg="Container 1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:13:50.873677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212542025.mount: Deactivated successfully. Nov 24 00:13:50.903162 containerd[1981]: time="2025-11-24T00:13:50.903109622Z" level=info msg="CreateContainer within sandbox \"bce437417f46bf67eb814b3fdfb394b0fd827b1cc98bc763b55043315e1ff761\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab\"" Nov 24 00:13:50.904909 containerd[1981]: time="2025-11-24T00:13:50.903665119Z" level=info msg="StartContainer for \"1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab\"" Nov 24 00:13:50.905121 containerd[1981]: time="2025-11-24T00:13:50.905007319Z" level=info msg="connecting to shim 1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab" address="unix:///run/containerd/s/a14ac068c3d820e067dfc25f59df587967113c8fbf853f48a9da88de6f4c36aa" protocol=ttrpc version=3 Nov 24 00:13:50.934305 systemd[1]: Started cri-containerd-1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab.scope - libcontainer container 1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab. 
Nov 24 00:13:51.022982 containerd[1981]: time="2025-11-24T00:13:51.022928637Z" level=info msg="StartContainer for \"1e46d24a035afad9ed5dda60222b890c893e1ada5c3ca31b2551149e6fd776ab\" returns successfully" Nov 24 00:13:51.567921 kubelet[3599]: E1124 00:13:51.567780 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:13:52.573002 containerd[1981]: time="2025-11-24T00:13:52.572692088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 00:13:52.844155 containerd[1981]: time="2025-11-24T00:13:52.844017517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:52.846280 containerd[1981]: time="2025-11-24T00:13:52.846221756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 00:13:52.846410 containerd[1981]: time="2025-11-24T00:13:52.846252746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 00:13:52.848037 kubelet[3599]: E1124 00:13:52.847980 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:13:52.849281 kubelet[3599]: E1124 00:13:52.848496 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 00:13:52.849381 containerd[1981]: time="2025-11-24T00:13:52.849040991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 00:13:52.849571 kubelet[3599]: E1124 00:13:52.848805 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-285p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9xvvr_calico-system(9441d7ab-9ca0-4aa4-8c69-0bae216edd81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:52.850938 kubelet[3599]: E1124 00:13:52.850905 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:13:53.095821 containerd[1981]: 
time="2025-11-24T00:13:53.095590298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:53.097815 containerd[1981]: time="2025-11-24T00:13:53.097662593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 00:13:53.097815 containerd[1981]: time="2025-11-24T00:13:53.097775252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 00:13:53.098045 kubelet[3599]: E1124 00:13:53.097992 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:13:53.098111 kubelet[3599]: E1124 00:13:53.098069 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 00:13:53.099035 kubelet[3599]: E1124 00:13:53.098972 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:7637e2ea3ae94bb89edf74c1cba02e3f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:53.101089 containerd[1981]: time="2025-11-24T00:13:53.101043719Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 00:13:53.354743 containerd[1981]: time="2025-11-24T00:13:53.354283127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:53.356442 containerd[1981]: time="2025-11-24T00:13:53.356305384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 00:13:53.356442 containerd[1981]: time="2025-11-24T00:13:53.356412058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 00:13:53.356824 kubelet[3599]: E1124 00:13:53.356751 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:13:53.357014 kubelet[3599]: E1124 00:13:53.356986 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 00:13:53.357249 kubelet[3599]: E1124 00:13:53.357202 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d64lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67965d874b-g8xwp_calico-system(036bfdfd-8582-4bd8-b46a-aee9f6d00cad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:53.358626 kubelet[3599]: E1124 00:13:53.358567 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:13:55.351416 systemd[1]: cri-containerd-c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349.scope: Deactivated successfully. Nov 24 00:13:55.352726 systemd[1]: cri-containerd-c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349.scope: Consumed 2.971s CPU time, 39.8M memory peak, 34.2M read from disk. 
Nov 24 00:13:55.357592 containerd[1981]: time="2025-11-24T00:13:55.357459091Z" level=info msg="received container exit event container_id:\"c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349\" id:\"c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349\" pid:3177 exit_status:1 exited_at:{seconds:1763943235 nanos:356285228}" Nov 24 00:13:55.389367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349-rootfs.mount: Deactivated successfully. Nov 24 00:13:55.567712 containerd[1981]: time="2025-11-24T00:13:55.567664661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 00:13:55.804097 containerd[1981]: time="2025-11-24T00:13:55.804036293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 00:13:55.806954 containerd[1981]: time="2025-11-24T00:13:55.806666240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 00:13:55.806954 containerd[1981]: time="2025-11-24T00:13:55.806665370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 00:13:55.807552 kubelet[3599]: E1124 00:13:55.807271 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:13:55.807552 kubelet[3599]: E1124 00:13:55.807327 3599 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 00:13:55.807552 kubelet[3599]: E1124 00:13:55.807488 3599 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ck8tq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68bdc98bdb-v9btm_calico-apiserver(9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 00:13:55.809199 kubelet[3599]: E1124 00:13:55.808620 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-v9btm" podUID="9bb7a377-4ecd-4dcf-a90a-e0e0f9c65655" Nov 24 00:13:55.812707 kubelet[3599]: I1124 00:13:55.812662 3599 scope.go:117] "RemoveContainer" containerID="c8df5524228ed0d3a7219474bd164831009a6a9d432c22eec75ff02faa64d349" Nov 24 00:13:55.815267 containerd[1981]: time="2025-11-24T00:13:55.815229664Z" level=info msg="CreateContainer within sandbox \"d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 24 00:13:55.840388 containerd[1981]: time="2025-11-24T00:13:55.838489670Z" level=info msg="Container d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:13:55.858980 containerd[1981]: time="2025-11-24T00:13:55.858942310Z" level=info msg="CreateContainer within sandbox \"d221ca9058ba693c935041e90a6539f1edf6372c9472b487d8691e8944e83fe9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566\"" Nov 24 00:13:55.861302 containerd[1981]: time="2025-11-24T00:13:55.859815047Z" level=info msg="StartContainer for \"d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566\"" Nov 24 00:13:55.861554 containerd[1981]: time="2025-11-24T00:13:55.861516835Z" level=info msg="connecting to shim d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566" address="unix:///run/containerd/s/f583446f36db2f3874765118543aee0c70448b9c8603bab4ac422b20c9355193" protocol=ttrpc version=3 Nov 24 00:13:55.888080 systemd[1]: Started cri-containerd-d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566.scope - libcontainer container d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566. 
Nov 24 00:13:55.946878 containerd[1981]: time="2025-11-24T00:13:55.946811950Z" level=info msg="StartContainer for \"d05dd99942441264399f14844a77de0b169aeef6e158abb21b2aad5ba8650566\" returns successfully" Nov 24 00:13:59.418034 kubelet[3599]: E1124 00:13:59.417797 3599 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 00:14:00.568335 kubelet[3599]: I1124 00:14:00.566959 3599 scope.go:117] "RemoveContainer" containerID="ebcb8b9d653bd62f79861615efeafeb95b9fe32a223f721cc39db719e527fe07" Nov 24 00:14:00.569525 containerd[1981]: time="2025-11-24T00:14:00.569487218Z" level=info msg="CreateContainer within sandbox \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Nov 24 00:14:00.586878 containerd[1981]: time="2025-11-24T00:14:00.586815042Z" level=info msg="Container e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a: CDI devices from CRI Config.CDIDevices: []" Nov 24 00:14:00.600860 containerd[1981]: time="2025-11-24T00:14:00.600773232Z" level=info msg="CreateContainer within sandbox \"f255b6e32ed501893999a375463d91f6cf10c8f41b81e3d7e1775ec177072809\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a\"" Nov 24 00:14:00.601929 containerd[1981]: time="2025-11-24T00:14:00.601476181Z" level=info msg="StartContainer for \"e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a\"" Nov 24 00:14:00.602593 containerd[1981]: time="2025-11-24T00:14:00.602555259Z" level=info msg="connecting to shim e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a" address="unix:///run/containerd/s/e40270d178c3350a13e3c3cb6415180a02bceecada0ab2973010ae677c5af572" protocol=ttrpc version=3 Nov 24 00:14:00.631296 systemd[1]: Started cri-containerd-e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a.scope - libcontainer container e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a. 
Nov 24 00:14:00.687922 containerd[1981]: time="2025-11-24T00:14:00.687869153Z" level=info msg="StartContainer for \"e60a2644c6f9a2dff2a1120474e005a7b594c350550c617a6d9dcce9df82793a\" returns successfully" Nov 24 00:14:01.567870 kubelet[3599]: E1124 00:14:01.567788 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-9bb64f948-hbf2v" podUID="02dedcc0-cbf6-46e5-bf8e-d29b3313eb81" Nov 24 00:14:01.568494 kubelet[3599]: E1124 00:14:01.568453 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l5ntz" podUID="32cb229b-909c-49d5-aa91-1c2bceaac746" Nov 24 00:14:04.569226 kubelet[3599]: E1124 00:14:04.569101 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68bdc98bdb-jnjxv" podUID="abebab1e-f092-4a6b-94e1-1c92a233e08a" Nov 24 00:14:04.572013 kubelet[3599]: E1124 00:14:04.570839 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-67965d874b-g8xwp" podUID="036bfdfd-8582-4bd8-b46a-aee9f6d00cad" Nov 24 00:14:07.567804 kubelet[3599]: E1124 00:14:07.567689 3599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9xvvr" podUID="9441d7ab-9ca0-4aa4-8c69-0bae216edd81" Nov 24 00:14:09.418671 kubelet[3599]: E1124 00:14:09.418408 3599 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"