Dec 12 17:28:14.152150 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 12 17:28:14.152196 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:28:14.152222 kernel: KASLR disabled due to lack of seed
Dec 12 17:28:14.152239 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:28:14.152257 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Dec 12 17:28:14.152272 kernel: secureboot: Secure boot disabled
Dec 12 17:28:14.152290 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:28:14.152306 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 12 17:28:14.152322 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 12 17:28:14.152337 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 12 17:28:14.152353 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 12 17:28:14.152373 kernel: ACPI: FACS 0x0000000078630000 000040
Dec 12 17:28:14.152425 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 12 17:28:14.152476 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 12 17:28:14.152498 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 12 17:28:14.152515 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 12 17:28:14.152541 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 12 17:28:14.152559 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 12 17:28:14.152576 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 12 17:28:14.152593 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 12 17:28:14.152609 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 12 17:28:14.152626 kernel: printk: legacy bootconsole [uart0] enabled
Dec 12 17:28:14.152643 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:28:14.152660 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 12 17:28:14.152679 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Dec 12 17:28:14.152695 kernel: Zone ranges:
Dec 12 17:28:14.152711 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 12 17:28:14.152733 kernel: DMA32 empty
Dec 12 17:28:14.152750 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 12 17:28:14.152766 kernel: Device empty
Dec 12 17:28:14.152783 kernel: Movable zone start for each node
Dec 12 17:28:14.152799 kernel: Early memory node ranges
Dec 12 17:28:14.152817 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 12 17:28:14.152833 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 12 17:28:14.152850 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 12 17:28:14.152867 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 12 17:28:14.152884 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 12 17:28:14.152900 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 12 17:28:14.152917 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 12 17:28:14.152939 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 12 17:28:14.152963 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 12 17:28:14.152980 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 12 17:28:14.152997 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Dec 12 17:28:14.153015 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:28:14.153038 kernel: psci: PSCIv1.0 detected in firmware.
Dec 12 17:28:14.153055 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:28:14.153072 kernel: psci: Trusted OS migration not required
Dec 12 17:28:14.153089 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:28:14.153106 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Dec 12 17:28:14.153123 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:28:14.153140 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:28:14.153158 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 12 17:28:14.153174 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:28:14.153191 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:28:14.153209 kernel: CPU features: detected: Spectre-v2
Dec 12 17:28:14.153232 kernel: CPU features: detected: Spectre-v3a
Dec 12 17:28:14.153249 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:28:14.153266 kernel: CPU features: detected: ARM erratum 1742098
Dec 12 17:28:14.153283 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 12 17:28:14.153300 kernel: alternatives: applying boot alternatives
Dec 12 17:28:14.153320 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:28:14.153338 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:28:14.153356 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:28:14.153375 kernel: Fallback order for Node 0: 0
Dec 12 17:28:14.154475 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Dec 12 17:28:14.154506 kernel: Policy zone: Normal
Dec 12 17:28:14.154535 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:28:14.154552 kernel: software IO TLB: area num 2.
Dec 12 17:28:14.154571 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Dec 12 17:28:14.154588 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 17:28:14.154606 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:28:14.154624 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:28:14.154641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 17:28:14.154659 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:28:14.154676 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:28:14.154693 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:28:14.154710 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 17:28:14.154732 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 17:28:14.154750 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 17:28:14.154767 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:28:14.154784 kernel: GICv3: 96 SPIs implemented
Dec 12 17:28:14.154818 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:28:14.154841 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:28:14.154859 kernel: GICv3: GICv3 features: 16 PPIs
Dec 12 17:28:14.154876 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:28:14.154893 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 12 17:28:14.154910 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 12 17:28:14.154928 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:28:14.154947 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:28:14.154970 kernel: GICv3: using LPI property table @0x0000000400110000
Dec 12 17:28:14.154987 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 12 17:28:14.155004 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Dec 12 17:28:14.155021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:28:14.155037 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 12 17:28:14.155054 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 12 17:28:14.155071 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 12 17:28:14.155088 kernel: Console: colour dummy device 80x25
Dec 12 17:28:14.155106 kernel: printk: legacy console [tty1] enabled
Dec 12 17:28:14.155124 kernel: ACPI: Core revision 20240827
Dec 12 17:28:14.155141 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 12 17:28:14.155164 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:28:14.155181 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:28:14.155198 kernel: landlock: Up and running.
Dec 12 17:28:14.155216 kernel: SELinux: Initializing.
Dec 12 17:28:14.155235 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:28:14.155253 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:28:14.155271 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:28:14.155290 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:28:14.155313 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:28:14.155333 kernel: Remapping and enabling EFI services.
Dec 12 17:28:14.155352 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:28:14.155371 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:28:14.155434 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 12 17:28:14.155458 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Dec 12 17:28:14.155477 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 12 17:28:14.155495 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 17:28:14.155513 kernel: SMP: Total of 2 processors activated.
Dec 12 17:28:14.155540 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:28:14.155568 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:28:14.155587 kernel: CPU features: detected: 32-bit EL1 Support
Dec 12 17:28:14.155610 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:28:14.155628 kernel: alternatives: applying system-wide alternatives
Dec 12 17:28:14.155647 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Dec 12 17:28:14.155666 kernel: devtmpfs: initialized
Dec 12 17:28:14.155685 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:28:14.155709 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 17:28:14.155728 kernel: 16880 pages in range for non-PLT usage
Dec 12 17:28:14.155747 kernel: 508400 pages in range for PLT usage
Dec 12 17:28:14.155766 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:28:14.155784 kernel: SMBIOS 3.0.0 present.
Dec 12 17:28:14.155802 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 12 17:28:14.155820 kernel: DMI: Memory slots populated: 0/0
Dec 12 17:28:14.155839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:28:14.155857 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:28:14.155880 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:28:14.155899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:28:14.155917 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:28:14.155934 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Dec 12 17:28:14.155952 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:28:14.155970 kernel: cpuidle: using governor menu
Dec 12 17:28:14.155987 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:28:14.156005 kernel: ASID allocator initialised with 65536 entries
Dec 12 17:28:14.156023 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:28:14.156044 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:28:14.156062 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:28:14.156080 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:28:14.156098 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:28:14.156115 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:28:14.156133 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:28:14.156151 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:28:14.156169 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:28:14.156186 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:28:14.156209 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:28:14.156227 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:28:14.156246 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:28:14.156264 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:28:14.156282 kernel: ACPI: Interpreter enabled
Dec 12 17:28:14.156300 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:28:14.156318 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:28:14.156335 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:28:14.156353 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:28:14.156375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Dec 12 17:28:14.157786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:28:14.157998 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:28:14.158204 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:28:14.160514 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Dec 12 17:28:14.160773 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Dec 12 17:28:14.160801 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 12 17:28:14.160831 kernel: acpiphp: Slot [1] registered
Dec 12 17:28:14.160850 kernel: acpiphp: Slot [2] registered
Dec 12 17:28:14.160868 kernel: acpiphp: Slot [3] registered
Dec 12 17:28:14.160887 kernel: acpiphp: Slot [4] registered
Dec 12 17:28:14.160905 kernel: acpiphp: Slot [5] registered
Dec 12 17:28:14.160923 kernel: acpiphp: Slot [6] registered
Dec 12 17:28:14.160941 kernel: acpiphp: Slot [7] registered
Dec 12 17:28:14.160959 kernel: acpiphp: Slot [8] registered
Dec 12 17:28:14.160977 kernel: acpiphp: Slot [9] registered
Dec 12 17:28:14.160994 kernel: acpiphp: Slot [10] registered
Dec 12 17:28:14.161018 kernel: acpiphp: Slot [11] registered
Dec 12 17:28:14.161035 kernel: acpiphp: Slot [12] registered
Dec 12 17:28:14.161053 kernel: acpiphp: Slot [13] registered
Dec 12 17:28:14.161071 kernel: acpiphp: Slot [14] registered
Dec 12 17:28:14.161088 kernel: acpiphp: Slot [15] registered
Dec 12 17:28:14.161106 kernel: acpiphp: Slot [16] registered
Dec 12 17:28:14.161124 kernel: acpiphp: Slot [17] registered
Dec 12 17:28:14.161141 kernel: acpiphp: Slot [18] registered
Dec 12 17:28:14.161159 kernel: acpiphp: Slot [19] registered
Dec 12 17:28:14.161181 kernel: acpiphp: Slot [20] registered
Dec 12 17:28:14.161199 kernel: acpiphp: Slot [21] registered
Dec 12 17:28:14.161218 kernel: acpiphp: Slot [22] registered
Dec 12 17:28:14.161236 kernel: acpiphp: Slot [23] registered
Dec 12 17:28:14.161255 kernel: acpiphp: Slot [24] registered
Dec 12 17:28:14.161273 kernel: acpiphp: Slot [25] registered
Dec 12 17:28:14.161291 kernel: acpiphp: Slot [26] registered
Dec 12 17:28:14.161309 kernel: acpiphp: Slot [27] registered
Dec 12 17:28:14.161326 kernel: acpiphp: Slot [28] registered
Dec 12 17:28:14.161344 kernel: acpiphp: Slot [29] registered
Dec 12 17:28:14.161366 kernel: acpiphp: Slot [30] registered
Dec 12 17:28:14.161432 kernel: acpiphp: Slot [31] registered
Dec 12 17:28:14.161459 kernel: PCI host bridge to bus 0000:00
Dec 12 17:28:14.161700 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 12 17:28:14.161877 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:28:14.162048 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 12 17:28:14.162222 kernel: pci_bus 0000:00: root bus resource [bus 00]
Dec 12 17:28:14.162608 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:28:14.162872 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Dec 12 17:28:14.163074 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Dec 12 17:28:14.163284 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Dec 12 17:28:14.163522 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Dec 12 17:28:14.163716 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 12 17:28:14.163936 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Dec 12 17:28:14.164129 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Dec 12 17:28:14.164322 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Dec 12 17:28:14.165617 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Dec 12 17:28:14.165838 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 12 17:28:14.166024 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 12 17:28:14.166199 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:28:14.166433 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 12 17:28:14.166463 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:28:14.166483 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:28:14.166501 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:28:14.166520 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:28:14.166538 kernel: iommu: Default domain type: Translated
Dec 12 17:28:14.166556 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:28:14.166573 kernel: efivars: Registered efivars operations
Dec 12 17:28:14.166592 kernel: vgaarb: loaded
Dec 12 17:28:14.166616 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:28:14.166634 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:28:14.166652 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:28:14.166670 kernel: pnp: PnP ACPI init
Dec 12 17:28:14.166905 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 12 17:28:14.166934 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:28:14.166953 kernel: NET: Registered PF_INET protocol family
Dec 12 17:28:14.166971 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:28:14.166995 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:28:14.167014 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:28:14.167031 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:28:14.167050 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:28:14.167067 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:28:14.167085 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:28:14.167103 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:28:14.167121 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:28:14.167139 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:28:14.167161 kernel: kvm [1]: HYP mode not available
Dec 12 17:28:14.167179 kernel: Initialise system trusted keyrings
Dec 12 17:28:14.167196 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:28:14.167214 kernel: Key type asymmetric registered
Dec 12 17:28:14.167233 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:28:14.167250 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:28:14.167268 kernel: io scheduler mq-deadline registered
Dec 12 17:28:14.167287 kernel: io scheduler kyber registered
Dec 12 17:28:14.167305 kernel: io scheduler bfq registered
Dec 12 17:28:14.169650 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 12 17:28:14.169700 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:28:14.169721 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:28:14.169740 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 12 17:28:14.169759 kernel: ACPI: button: Sleep Button [SLPB]
Dec 12 17:28:14.169777 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:28:14.169796 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 12 17:28:14.170004 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 12 17:28:14.170039 kernel: printk: legacy console [ttyS0] disabled
Dec 12 17:28:14.170059 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 12 17:28:14.170077 kernel: printk: legacy console [ttyS0] enabled
Dec 12 17:28:14.170095 kernel: printk: legacy bootconsole [uart0] disabled
Dec 12 17:28:14.170113 kernel: thunder_xcv, ver 1.0
Dec 12 17:28:14.170130 kernel: thunder_bgx, ver 1.0
Dec 12 17:28:14.170148 kernel: nicpf, ver 1.0
Dec 12 17:28:14.170166 kernel: nicvf, ver 1.0
Dec 12 17:28:14.170368 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:28:14.170609 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:28:13 UTC (1765560493)
Dec 12 17:28:14.170637 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:28:14.170656 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Dec 12 17:28:14.170674 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:28:14.170693 kernel: watchdog: NMI not fully supported
Dec 12 17:28:14.170711 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:28:14.170729 kernel: Segment Routing with IPv6
Dec 12 17:28:14.170747 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:28:14.170765 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:28:14.170789 kernel: Key type dns_resolver registered
Dec 12 17:28:14.170828 kernel: registered taskstats version 1
Dec 12 17:28:14.170850 kernel: Loading compiled-in X.509 certificates
Dec 12 17:28:14.170869 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:28:14.170887 kernel: Demotion targets for Node 0: null
Dec 12 17:28:14.170905 kernel: Key type .fscrypt registered
Dec 12 17:28:14.170922 kernel: Key type fscrypt-provisioning registered
Dec 12 17:28:14.170939 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:28:14.170957 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:28:14.170981 kernel: ima: No architecture policies found
Dec 12 17:28:14.170999 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:28:14.171017 kernel: clk: Disabling unused clocks
Dec 12 17:28:14.171035 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:28:14.171053 kernel: Warning: unable to open an initial console.
Dec 12 17:28:14.171071 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:28:14.171089 kernel: Run /init as init process
Dec 12 17:28:14.171108 kernel: with arguments:
Dec 12 17:28:14.171125 kernel: /init
Dec 12 17:28:14.171147 kernel: with environment:
Dec 12 17:28:14.171164 kernel: HOME=/
Dec 12 17:28:14.171182 kernel: TERM=linux
Dec 12 17:28:14.171203 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:28:14.171228 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:28:14.171249 systemd[1]: Detected virtualization amazon.
Dec 12 17:28:14.171268 systemd[1]: Detected architecture arm64.
Dec 12 17:28:14.171291 systemd[1]: Running in initrd.
Dec 12 17:28:14.171337 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:28:14.173119 systemd[1]: Hostname set to <localhost>.
Dec 12 17:28:14.173173 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:28:14.173193 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:28:14.173213 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:28:14.173233 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:28:14.173254 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:28:14.173285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:28:14.173305 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:28:14.173326 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:28:14.173348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:28:14.173367 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:28:14.173434 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:28:14.173458 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:28:14.173483 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:28:14.173503 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:28:14.173522 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:28:14.173541 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:28:14.173560 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:28:14.173579 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:28:14.173599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:28:14.173618 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:28:14.173637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:28:14.173661 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:28:14.173680 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:28:14.173700 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:28:14.173719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:28:14.173738 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:28:14.173757 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:28:14.173777 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:28:14.173796 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:28:14.173819 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:28:14.173839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:28:14.173858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:28:14.173878 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:28:14.173898 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:28:14.173924 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:28:14.173944 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:28:14.174020 systemd-journald[256]: Collecting audit messages is disabled.
Dec 12 17:28:14.174065 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:28:14.174091 kernel: Bridge firewalling registered
Dec 12 17:28:14.174122 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:28:14.174163 systemd-journald[256]: Journal started
Dec 12 17:28:14.174204 systemd-journald[256]: Runtime Journal (/run/log/journal/ec2a31d0b75c8565cac4cbbe6dbf7c82) is 8M, max 75.3M, 67.3M free.
Dec 12 17:28:14.128181 systemd-modules-load[259]: Inserted module 'overlay'
Dec 12 17:28:14.175939 systemd-modules-load[259]: Inserted module 'br_netfilter'
Dec 12 17:28:14.197534 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:28:14.186327 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:28:14.187223 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:28:14.194606 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:28:14.201907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:28:14.219789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:28:14.229620 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:28:14.247100 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:28:14.272255 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:28:14.279023 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:28:14.289045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:28:14.297468 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:28:14.303190 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:28:14.325792 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:28:14.350421 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:28:14.435619 systemd-resolved[299]: Positive Trust Anchors:
Dec 12 17:28:14.435650 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:28:14.435708 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:28:14.516430 kernel: SCSI subsystem initialized
Dec 12 17:28:14.525428 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:28:14.538433 kernel: iscsi: registered transport (tcp)
Dec 12 17:28:14.561437 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:28:14.561524 kernel: QLogic iSCSI HBA Driver
Dec 12 17:28:14.598629 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:28:14.633509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:28:14.645716 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:28:14.706424 kernel: random: crng init done
Dec 12 17:28:14.706899 systemd-resolved[299]: Defaulting to hostname 'linux'.
Dec 12 17:28:14.713223 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:28:14.716023 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:28:14.746466 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:28:14.751688 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:28:14.850470 kernel: raid6: neonx8 gen() 6470 MB/s
Dec 12 17:28:14.867458 kernel: raid6: neonx4 gen() 6455 MB/s
Dec 12 17:28:14.884453 kernel: raid6: neonx2 gen() 5374 MB/s
Dec 12 17:28:14.901451 kernel: raid6: neonx1 gen() 3905 MB/s
Dec 12 17:28:14.918454 kernel: raid6: int64x8 gen() 3625 MB/s
Dec 12 17:28:14.935452 kernel: raid6: int64x4 gen() 3670 MB/s
Dec 12 17:28:14.952462 kernel: raid6: int64x2 gen() 3568 MB/s
Dec 12 17:28:14.970593 kernel: raid6: int64x1 gen() 2718 MB/s
Dec 12 17:28:14.970691 kernel: raid6: using algorithm neonx8 gen() 6470 MB/s
Dec 12 17:28:14.989563 kernel: raid6: .... xor() 4675 MB/s, rmw enabled
Dec 12 17:28:14.989651 kernel: raid6: using neon recovery algorithm
Dec 12 17:28:14.999341 kernel: xor: measuring software checksum speed
Dec 12 17:28:14.999444 kernel: 8regs : 12976 MB/sec
Dec 12 17:28:15.002292 kernel: 32regs : 12264 MB/sec
Dec 12 17:28:15.002375 kernel: arm64_neon : 9065 MB/sec
Dec 12 17:28:15.002425 kernel: xor: using function: 8regs (12976 MB/sec)
Dec 12 17:28:15.099467 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:28:15.112240 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:28:15.123308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:28:15.180977 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Dec 12 17:28:15.193710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:28:15.202653 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:28:15.248074 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Dec 12 17:28:15.299563 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:28:15.311767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:28:15.448572 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:28:15.459148 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:28:15.623023 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:28:15.623107 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 12 17:28:15.634459 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 12 17:28:15.638588 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 12 17:28:15.641368 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 12 17:28:15.641704 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 12 17:28:15.649449 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 12 17:28:15.661108 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:28:15.661179 kernel: GPT:9289727 != 33554431
Dec 12 17:28:15.662951 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:28:15.664566 kernel: GPT:9289727 != 33554431
Dec 12 17:28:15.666552 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:28:15.668104 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 17:28:15.702249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:28:15.705368 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:de:5e:75:a2:bb
Dec 12 17:28:15.703295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:28:15.711877 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:28:15.715759 (udev-worker)[576]: Network interface NamePolicy= disabled on kernel command line.
Dec 12 17:28:15.724010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:28:15.727329 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:28:15.779092 kernel: nvme nvme0: using unchecked data buffer
Dec 12 17:28:15.781114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:28:15.936275 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 12 17:28:15.971096 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 12 17:28:15.992488 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:28:16.020480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 12 17:28:16.059255 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 12 17:28:16.062962 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 12 17:28:16.068606 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:28:16.071307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:28:16.074294 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:28:16.088608 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:28:16.096623 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:28:16.115463 disk-uuid[686]: Primary Header is updated.
Dec 12 17:28:16.115463 disk-uuid[686]: Secondary Entries is updated.
Dec 12 17:28:16.115463 disk-uuid[686]: Secondary Header is updated.
Dec 12 17:28:16.132536 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 17:28:16.137637 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:28:17.157477 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 12 17:28:17.159145 disk-uuid[687]: The operation has completed successfully.
Dec 12 17:28:17.378743 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:28:17.379530 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:28:17.473830 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:28:17.513601 sh[955]: Success
Dec 12 17:28:17.548374 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:28:17.548514 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:28:17.548549 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:28:17.563459 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:28:17.670971 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:28:17.677986 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:28:17.700262 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:28:17.727466 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (978)
Dec 12 17:28:17.732425 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:28:17.732544 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:28:17.846547 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 12 17:28:17.846629 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:28:17.846655 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:28:17.880028 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:28:17.889971 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:28:17.893943 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:28:17.895628 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:28:17.911176 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:28:17.960452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1001)
Dec 12 17:28:17.966443 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:28:17.966555 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:28:17.975069 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 17:28:17.975165 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 17:28:17.985613 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:28:17.987504 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:28:17.995421 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:28:18.131445 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:28:18.140478 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:28:18.222190 systemd-networkd[1148]: lo: Link UP
Dec 12 17:28:18.222219 systemd-networkd[1148]: lo: Gained carrier
Dec 12 17:28:18.225776 systemd-networkd[1148]: Enumeration completed
Dec 12 17:28:18.226718 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:28:18.226726 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:28:18.227362 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:28:18.230834 systemd[1]: Reached target network.target - Network.
Dec 12 17:28:18.243192 systemd-networkd[1148]: eth0: Link UP
Dec 12 17:28:18.243204 systemd-networkd[1148]: eth0: Gained carrier
Dec 12 17:28:18.243537 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:28:18.272527 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.24.26/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 12 17:28:18.585846 ignition[1052]: Ignition 2.22.0
Dec 12 17:28:18.585882 ignition[1052]: Stage: fetch-offline
Dec 12 17:28:18.586933 ignition[1052]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:28:18.586962 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 17:28:18.588256 ignition[1052]: Ignition finished successfully
Dec 12 17:28:18.595661 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:28:18.608758 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 17:28:18.668177 ignition[1158]: Ignition 2.22.0
Dec 12 17:28:18.668214 ignition[1158]: Stage: fetch
Dec 12 17:28:18.670269 ignition[1158]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:28:18.670311 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 17:28:18.670550 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 17:28:18.694070 ignition[1158]: PUT result: OK
Dec 12 17:28:18.700780 ignition[1158]: parsed url from cmdline: ""
Dec 12 17:28:18.700958 ignition[1158]: no config URL provided
Dec 12 17:28:18.700984 ignition[1158]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:28:18.701230 ignition[1158]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:28:18.701325 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 17:28:18.711793 ignition[1158]: PUT result: OK
Dec 12 17:28:18.711966 ignition[1158]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 12 17:28:18.717224 ignition[1158]: GET result: OK
Dec 12 17:28:18.717719 ignition[1158]: parsing config with SHA512: e88f6bc628d5e405598100785f7085d29e34845b1bbf5a22de9b0953af4cd991f539fe86384a197817c96293af042825b0bd1e36cf6dcefd0c5bd428cab0f59d
Dec 12 17:28:18.726944 unknown[1158]: fetched base config from "system"
Dec 12 17:28:18.727981 ignition[1158]: fetch: fetch complete
Dec 12 17:28:18.726981 unknown[1158]: fetched base config from "system"
Dec 12 17:28:18.727996 ignition[1158]: fetch: fetch passed
Dec 12 17:28:18.726997 unknown[1158]: fetched user config from "aws"
Dec 12 17:28:18.728114 ignition[1158]: Ignition finished successfully
Dec 12 17:28:18.742675 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 17:28:18.744375 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:28:18.812048 ignition[1165]: Ignition 2.22.0
Dec 12 17:28:18.812083 ignition[1165]: Stage: kargs
Dec 12 17:28:18.812649 ignition[1165]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:28:18.812674 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 17:28:18.813017 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 17:28:18.815082 ignition[1165]: PUT result: OK
Dec 12 17:28:18.831438 ignition[1165]: kargs: kargs passed
Dec 12 17:28:18.833227 ignition[1165]: Ignition finished successfully
Dec 12 17:28:18.839469 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:28:18.846033 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:28:18.912965 ignition[1172]: Ignition 2.22.0
Dec 12 17:28:18.914278 ignition[1172]: Stage: disks
Dec 12 17:28:18.914903 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:28:18.914927 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 17:28:18.915072 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 17:28:18.921495 ignition[1172]: PUT result: OK
Dec 12 17:28:18.928747 ignition[1172]: disks: disks passed
Dec 12 17:28:18.928850 ignition[1172]: Ignition finished successfully
Dec 12 17:28:18.931946 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:28:18.939431 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:28:18.942863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:28:18.947595 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:28:18.952419 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:28:18.956729 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:28:18.964331 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:28:19.018101 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 17:28:19.025125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:28:19.030972 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:28:19.169419 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:28:19.170977 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:28:19.175265 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:28:19.181635 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:28:19.189682 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:28:19.196170 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:28:19.196989 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:28:19.197040 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:28:19.226233 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:28:19.233667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:28:19.249447 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200)
Dec 12 17:28:19.254610 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:28:19.254691 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:28:19.262838 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 17:28:19.262914 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 17:28:19.265653 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:28:19.583725 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:28:19.605204 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:28:19.624009 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:28:19.632959 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:28:19.916471 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:28:19.923368 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:28:19.927146 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:28:19.933122 systemd-networkd[1148]: eth0: Gained IPv6LL
Dec 12 17:28:19.964935 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:28:19.968053 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:28:19.998682 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:28:20.026362 ignition[1313]: INFO : Ignition 2.22.0
Dec 12 17:28:20.028583 ignition[1313]: INFO : Stage: mount
Dec 12 17:28:20.028583 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:28:20.033089 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 17:28:20.033089 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 17:28:20.039323 ignition[1313]: INFO : PUT result: OK
Dec 12 17:28:20.045042 ignition[1313]: INFO : mount: mount passed
Dec 12 17:28:20.047566 ignition[1313]: INFO : Ignition finished successfully
Dec 12 17:28:20.050117 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:28:20.057601 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:28:20.175133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:28:20.220450 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1324)
Dec 12 17:28:20.225564 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:28:20.225651 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:28:20.234585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 12 17:28:20.234717 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 12 17:28:20.238627 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:28:20.304651 ignition[1341]: INFO : Ignition 2.22.0
Dec 12 17:28:20.306919 ignition[1341]: INFO : Stage: files
Dec 12 17:28:20.306919 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:28:20.306919 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 12 17:28:20.306919 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 12 17:28:20.325652 ignition[1341]: INFO : PUT result: OK
Dec 12 17:28:20.334600 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:28:20.338965 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:28:20.338965 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:28:20.352271 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:28:20.356540 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:28:20.360894 unknown[1341]: wrote ssh authorized keys file for user: core
Dec 12 17:28:20.363601 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:28:20.376202 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:28:20.380630 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 12 17:28:20.495734 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:28:20.642431 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:28:20.642431 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:28:20.642431 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:28:20.642431 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:28:20.659008 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:28:20.659008 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:28:20.659008 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:28:20.659008 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:28:20.659008 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:28:20.681979 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:28:20.686612 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:28:20.686612 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:28:20.700344 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:28:20.707964 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:28:20.707964 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 12 17:28:21.151643 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 17:28:21.553490 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:28:21.553490 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 17:28:21.561805 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:28:21.570802 ignition[1341]: INFO : files: files passed
Dec 12 17:28:21.570802 ignition[1341]: INFO : Ignition finished successfully
Dec 12 17:28:21.599494 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:28:21.607311 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:28:21.613856 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:28:21.637787 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:28:21.638255 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:28:21.659257 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:28:21.659257 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:28:21.668816 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:28:21.674467 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:28:21.680877 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:28:21.687183 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:28:21.770882 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:28:21.771932 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:28:21.781222 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:28:21.787585 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:28:21.792937 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:28:21.794358 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:28:21.839503 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:28:21.847161 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:28:21.891453 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:28:21.897846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:28:21.904613 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:28:21.908036 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:28:21.908365 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:28:21.918063 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:28:21.918501 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:28:21.928346 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:28:21.937087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:28:21.940431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:28:21.949075 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:28:21.953056 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:28:21.958420 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:28:21.966067 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:28:21.969449 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:28:21.974642 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:28:21.978682 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:28:21.979094 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:28:21.988422 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:28:21.991730 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:28:21.999632 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:28:22.003560 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:28:22.006775 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:28:22.007051 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:28:22.018047 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:28:22.020838 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:28:22.026867 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:28:22.027124 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Dec 12 17:28:22.035531 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:28:22.039873 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:28:22.040151 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:28:22.063655 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:28:22.071799 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:28:22.072588 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:28:22.083807 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:28:22.084051 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:28:22.102601 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:28:22.103109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:28:22.134645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:28:22.140778 ignition[1395]: INFO : Ignition 2.22.0 Dec 12 17:28:22.140778 ignition[1395]: INFO : Stage: umount Dec 12 17:28:22.148449 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:28:22.148449 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 12 17:28:22.149602 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:28:22.161107 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 12 17:28:22.161107 ignition[1395]: INFO : PUT result: OK Dec 12 17:28:22.149918 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:28:22.170339 ignition[1395]: INFO : umount: umount passed Dec 12 17:28:22.172428 ignition[1395]: INFO : Ignition finished successfully Dec 12 17:28:22.178002 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:28:22.178571 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:28:22.185404 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:28:22.185578 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:28:22.188233 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:28:22.188336 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:28:22.191078 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:28:22.191175 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:28:22.198843 systemd[1]: Stopped target network.target - Network. Dec 12 17:28:22.203213 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:28:22.203349 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:28:22.211629 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:28:22.214865 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:28:22.216278 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:28:22.220440 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:28:22.226216 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:28:22.229217 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:28:22.229318 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:28:22.233883 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:28:22.233969 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 12 17:28:22.237283 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:28:22.237872 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:28:22.244997 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:28:22.245101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:28:22.251563 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:28:22.251701 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:28:22.256476 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:28:22.262971 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:28:22.304259 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:28:22.305660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:28:22.319855 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:28:22.320693 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:28:22.321329 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:28:22.331965 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 17:28:22.333909 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:28:22.341475 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:28:22.342165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:28:22.352532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:28:22.367412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:28:22.367570 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:28:22.372189 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:28:22.372320 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:28:22.378172 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:28:22.378281 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:28:22.387598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:28:22.387714 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:28:22.395921 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:28:22.408692 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:28:22.408881 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:28:22.433166 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:28:22.436174 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:28:22.443944 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:28:22.444101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:28:22.450298 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:28:22.450414 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:28:22.457430 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 12 17:28:22.457726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:28:22.466092 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:28:22.466212 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:28:22.480549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:28:22.481059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:28:22.493093 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:28:22.497118 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:28:22.497259 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:28:22.511178 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:28:22.511306 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:28:22.518875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:28:22.518995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:28:22.532510 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:28:22.532727 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:28:22.532826 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:28:22.535554 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:28:22.536934 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:28:22.556087 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:28:22.556584 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:28:22.567089 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:28:22.574467 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:28:22.606888 systemd[1]: Switching root. Dec 12 17:28:22.661204 systemd-journald[256]: Journal stopped Dec 12 17:28:25.254811 systemd-journald[256]: Received SIGTERM from PID 1 (systemd). Dec 12 17:28:25.254961 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:28:25.255010 kernel: SELinux: policy capability open_perms=1 Dec 12 17:28:25.255045 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:28:25.255077 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:28:25.255109 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:28:25.255141 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:28:25.255181 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:28:25.255211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:28:25.255243 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:28:25.255275 kernel: audit: type=1403 audit(1765560503.175:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:28:25.255321 systemd[1]: Successfully loaded SELinux policy in 106.132ms. Dec 12 17:28:25.255380 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.822ms. 
Dec 12 17:28:25.255472 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:28:25.255510 systemd[1]: Detected virtualization amazon. Dec 12 17:28:25.255548 systemd[1]: Detected architecture arm64. Dec 12 17:28:25.255583 systemd[1]: Detected first boot. Dec 12 17:28:25.255624 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:28:25.255658 zram_generator::config[1441]: No configuration found. Dec 12 17:28:25.255701 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:28:25.255732 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:28:25.255765 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:28:25.255796 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:28:25.255827 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:28:25.255868 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:28:25.255900 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:28:25.255935 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:28:25.255971 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:28:25.256003 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:28:25.256034 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:28:25.256065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:28:25.256093 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:28:25.256128 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:28:25.256156 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:28:25.256188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:28:25.256217 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:28:25.256248 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:28:25.256277 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:28:25.256306 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:28:25.256337 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 12 17:28:25.256366 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:28:25.257491 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:28:25.257551 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:28:25.257582 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:28:25.257614 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:28:25.257644 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Dec 12 17:28:25.257673 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:28:25.257710 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:28:25.257740 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:28:25.257786 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:28:25.257821 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:28:25.257854 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:28:25.257886 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:28:25.257915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:28:25.257949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:28:25.257983 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:28:25.258013 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:28:25.258046 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:28:25.258083 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:28:25.258114 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:28:25.258146 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:28:25.258181 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:28:25.258216 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:28:25.258252 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:28:25.258286 systemd[1]: Reached target machines.target - Containers. Dec 12 17:28:25.258323 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:28:25.258359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:28:25.258434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:28:25.258472 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:28:25.258506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:28:25.258535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:28:25.258572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:28:25.258600 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:28:25.258633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:28:25.258662 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:28:25.258773 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:28:25.258818 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:28:25.258848 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:28:25.258880 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 12 17:28:25.258912 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:28:25.258942 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:28:25.258972 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:28:25.259004 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:28:25.259038 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:28:25.259079 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:28:25.259113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:28:25.259147 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:28:25.259177 systemd[1]: Stopped verity-setup.service. Dec 12 17:28:25.259215 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:28:25.259246 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:28:25.259280 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:28:25.259310 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:28:25.259346 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:28:25.259914 systemd-journald[1527]: Collecting audit messages is disabled. Dec 12 17:28:25.260021 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:28:25.260057 systemd-journald[1527]: Journal started Dec 12 17:28:25.260105 systemd-journald[1527]: Runtime Journal (/run/log/journal/ec2a31d0b75c8565cac4cbbe6dbf7c82) is 8M, max 75.3M, 67.3M free. Dec 12 17:28:25.261378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:28:24.648643 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:28:24.680530 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 12 17:28:24.681598 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:28:25.279632 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:28:25.279580 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:28:25.291229 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:28:25.303997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:28:25.304600 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:28:25.308510 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:28:25.309605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:28:25.314492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:28:25.325045 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:28:25.347825 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:28:25.376014 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:28:25.383835 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Dec 12 17:28:25.392619 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:28:25.395625 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:28:25.395724 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:28:25.404918 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:28:25.417729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:28:25.420781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:28:25.434725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:28:25.446063 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:28:25.452474 kernel: loop: module loaded Dec 12 17:28:25.450991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:28:25.453280 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:28:25.463848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:28:25.473947 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:28:25.478457 kernel: fuse: init (API version 7.41) Dec 12 17:28:25.493008 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:28:25.500369 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:28:25.507673 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:28:25.511553 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:28:25.512032 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:28:25.520045 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:28:25.525771 kernel: ACPI: bus type drm_connector registered Dec 12 17:28:25.523867 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:28:25.532090 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:28:25.533934 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:28:25.559581 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:28:25.563470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:28:25.594048 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:28:25.604445 kernel: loop0: detected capacity change from 0 to 61264 Dec 12 17:28:25.613738 systemd-journald[1527]: Time spent on flushing to /var/log/journal/ec2a31d0b75c8565cac4cbbe6dbf7c82 is 176.939ms for 925 entries. Dec 12 17:28:25.613738 systemd-journald[1527]: System Journal (/var/log/journal/ec2a31d0b75c8565cac4cbbe6dbf7c82) is 8M, max 195.6M, 187.6M free. Dec 12 17:28:25.808023 systemd-journald[1527]: Received client request to flush runtime journal. Dec 12 17:28:25.808099 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:28:25.808144 kernel: loop1: detected capacity change from 0 to 119840 Dec 12 17:28:25.634589 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Dec 12 17:28:25.638134 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:28:25.648834 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:28:25.732457 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:28:25.735461 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:28:25.751864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:28:25.797775 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:28:25.808663 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:28:25.818591 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:28:25.874903 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:28:25.904448 kernel: loop2: detected capacity change from 0 to 100632 Dec 12 17:28:25.910269 systemd-tmpfiles[1589]: ACLs are not supported, ignoring. Dec 12 17:28:25.910930 systemd-tmpfiles[1589]: ACLs are not supported, ignoring. Dec 12 17:28:25.922183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:28:26.025438 kernel: loop3: detected capacity change from 0 to 200800 Dec 12 17:28:26.078438 kernel: loop4: detected capacity change from 0 to 61264 Dec 12 17:28:26.113437 kernel: loop5: detected capacity change from 0 to 119840 Dec 12 17:28:26.136451 kernel: loop6: detected capacity change from 0 to 100632 Dec 12 17:28:26.156223 kernel: loop7: detected capacity change from 0 to 200800 Dec 12 17:28:26.192256 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 12 17:28:26.193380 (sd-merge)[1600]: Merged extensions into '/usr'. Dec 12 17:28:26.206629 systemd[1]: Reload requested from client PID 1571 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:28:26.206664 systemd[1]: Reloading... Dec 12 17:28:26.443453 zram_generator::config[1626]: No configuration found. Dec 12 17:28:26.908733 systemd[1]: Reloading finished in 701 ms. Dec 12 17:28:26.942702 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:28:26.956784 systemd[1]: Starting ensure-sysext.service... Dec 12 17:28:26.963818 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:28:27.034280 systemd[1]: Reload requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:28:27.034324 systemd[1]: Reloading... Dec 12 17:28:27.097694 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:28:27.097787 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:28:27.099894 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:28:27.100760 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:28:27.103764 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:28:27.104656 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Dec 12 17:28:27.104826 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. 
Dec 12 17:28:27.117288 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:28:27.118034 systemd-tmpfiles[1678]: Skipping /boot Dec 12 17:28:27.156084 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:28:27.158423 systemd-tmpfiles[1678]: Skipping /boot Dec 12 17:28:27.244441 zram_generator::config[1702]: No configuration found. Dec 12 17:28:27.331040 ldconfig[1566]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:28:27.710594 systemd[1]: Reloading finished in 675 ms. Dec 12 17:28:27.742838 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:28:27.747551 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:28:27.774528 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:28:27.794713 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:28:27.801955 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:28:27.808747 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:28:27.818335 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:28:27.834191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:28:27.845013 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:28:27.857279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:28:27.861028 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:28:27.874029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:28:27.886696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:28:27.889341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:28:27.889656 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:28:27.904068 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:28:27.910959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:28:27.911483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:28:27.911765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:28:27.924065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:28:27.941041 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:28:27.945296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 12 17:28:27.945711 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:28:27.946113 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:28:27.959953 systemd[1]: Finished ensure-sysext.service. Dec 12 17:28:27.964460 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:28:27.982881 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:28:28.020258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:28:28.021726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:28:28.037892 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:28:28.042634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:28:28.043347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:28:28.050276 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:28:28.064262 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:28:28.066605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:28:28.073593 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:28:28.075589 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:28:28.080876 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:28:28.120594 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:28:28.154284 systemd-udevd[1765]: Using default interface naming scheme 'v255'. Dec 12 17:28:28.171565 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:28:28.176121 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:28:28.187720 augenrules[1801]: No rules Dec 12 17:28:28.191957 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:28:28.192782 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:28:28.222652 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:28:28.241884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:28:28.257277 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:28:28.651362 systemd-networkd[1813]: lo: Link UP Dec 12 17:28:28.653469 systemd-networkd[1813]: lo: Gained carrier Dec 12 17:28:28.665031 (udev-worker)[1831]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:28:28.679219 systemd-resolved[1764]: Positive Trust Anchors: Dec 12 17:28:28.679267 systemd-resolved[1764]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:28:28.679333 systemd-resolved[1764]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:28:28.689672 systemd-networkd[1813]: Enumeration completed Dec 12 17:28:28.689906 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:28:28.695589 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:28:28.701054 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:28:28.719949 systemd-resolved[1764]: Defaulting to hostname 'linux'. Dec 12 17:28:28.730483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:28:28.733622 systemd[1]: Reached target network.target - Network. Dec 12 17:28:28.735925 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:28:28.739634 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:28:28.742865 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:28:28.746014 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:28:28.750960 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:28:28.754803 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:28:28.757879 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:28:28.760935 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:28:28.760999 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:28:28.763621 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:28:28.780508 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:28:28.789351 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:28:28.798852 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:28:28.802309 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:28:28.805641 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:28:28.828765 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:28:28.832210 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:28:28.838765 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:28:28.843625 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:28:28.846178 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:28:28.848766 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 12 17:28:28.848844 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:28:28.852824 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:28:28.860965 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:28:28.868847 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:28:28.875933 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:28:28.887051 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:28:28.916877 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:28:28.919507 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:28:28.926626 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:28:28.951007 systemd[1]: Started ntpd.service - Network Time Service. Dec 12 17:28:28.958013 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:28:28.973274 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 12 17:28:28.983994 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:28:29.009826 jq[1859]: false Dec 12 17:28:29.017746 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:28:29.037730 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:28:29.044272 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:28:29.047211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:28:29.063923 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:28:29.076973 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:28:29.077004 systemd-networkd[1813]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:28:29.098775 systemd-networkd[1813]: eth0: Link UP Dec 12 17:28:29.099247 systemd-networkd[1813]: eth0: Gained carrier Dec 12 17:28:29.099297 systemd-networkd[1813]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:28:29.102895 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:28:29.127974 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:28:29.150582 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:28:29.155472 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:28:29.156573 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:28:29.164951 systemd-networkd[1813]: eth0: DHCPv4 address 172.31.24.26/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 12 17:28:29.172778 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Dec 12 17:28:29.213954 jq[1876]: true Dec 12 17:28:29.227284 ntpd[1867]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 12 17:28:29.236703 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 12 17:28:29.236703 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:28:29.236703 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: ---------------------------------------------------- Dec 12 17:28:29.236685 ntpd[1867]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:28:29.237835 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:28:29.237835 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:28:29.237835 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: corporation. Support and training for ntp-4 are Dec 12 17:28:29.237835 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: available at https://www.nwtime.org/support Dec 12 17:28:29.237835 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: ---------------------------------------------------- Dec 12 17:28:29.236716 ntpd[1867]: ---------------------------------------------------- Dec 12 17:28:29.236738 ntpd[1867]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:28:29.236756 ntpd[1867]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:28:29.236774 ntpd[1867]: corporation. Support and training for ntp-4 are Dec 12 17:28:29.236796 ntpd[1867]: available at https://www.nwtime.org/support Dec 12 17:28:29.236814 ntpd[1867]: ---------------------------------------------------- Dec 12 17:28:29.241256 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:28:29.241885 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 12 17:28:29.259688 ntpd[1867]: proto: precision = 0.096 usec (-23) Dec 12 17:28:29.264943 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: proto: precision = 0.096 usec (-23) Dec 12 17:28:29.278967 extend-filesystems[1860]: Found /dev/nvme0n1p6 Dec 12 17:28:29.284848 ntpd[1867]: basedate set to 2025-11-30 Dec 12 17:28:29.285814 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: basedate set to 2025-11-30 Dec 12 17:28:29.285814 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: gps base set to 2025-11-30 (week 2395) Dec 12 17:28:29.285814 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:28:29.285814 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:28:29.284892 ntpd[1867]: gps base set to 2025-11-30 (week 2395) Dec 12 17:28:29.285140 ntpd[1867]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:28:29.285200 ntpd[1867]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:28:29.301451 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:28:29.301807 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:28:29.301807 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Listen normally on 3 eth0 172.31.24.26:123 Dec 12 17:28:29.301807 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: Listen normally on 4 lo [::1]:123 Dec 12 17:28:29.301807 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: bind(21) AF_INET6 [fe80::4de:5eff:fe75:a2bb%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 17:28:29.301807 ntpd[1867]: 12 Dec 17:28:29 ntpd[1867]: unable to create socket on eth0 (5) for [fe80::4de:5eff:fe75:a2bb%2]:123 Dec 12 17:28:29.301534 ntpd[1867]: Listen normally on 3 eth0 172.31.24.26:123 Dec 12 17:28:29.301590 ntpd[1867]: Listen normally on 4 lo [::1]:123 Dec 12 17:28:29.301645 ntpd[1867]: bind(21) AF_INET6 [fe80::4de:5eff:fe75:a2bb%2]:123 flags 0x811 failed: Cannot assign requested address Dec 12 17:28:29.301688 ntpd[1867]: unable to create socket on eth0 (5) for [fe80::4de:5eff:fe75:a2bb%2]:123 Dec 12 17:28:29.317596 systemd-coredump[1906]: Process 1867 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 12 17:28:29.325437 extend-filesystems[1860]: Found /dev/nvme0n1p9 Dec 12 17:28:29.343295 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 12 17:28:29.350490 extend-filesystems[1860]: Checking size of /dev/nvme0n1p9 Dec 12 17:28:29.358919 (ntainerd)[1901]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:28:29.359443 systemd[1]: Started systemd-coredump@0-1906-0.service - Process Core Dump (PID 1906/UID 0). Dec 12 17:28:29.413176 jq[1896]: true Dec 12 17:28:29.420931 dbus-daemon[1856]: [system] SELinux support is enabled Dec 12 17:28:29.421911 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:28:29.433358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:28:29.435513 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:28:29.438731 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Dec 12 17:28:29.438776 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:28:29.445266 dbus-daemon[1856]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1813 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 12 17:28:29.445996 tar[1883]: linux-arm64/LICENSE Dec 12 17:28:29.448880 tar[1883]: linux-arm64/helm Dec 12 17:28:29.456828 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 12 17:28:29.482298 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:28:29.487660 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:28:29.499966 update_engine[1875]: I20251212 17:28:29.494062 1875 main.cc:92] Flatcar Update Engine starting Dec 12 17:28:29.532244 update_engine[1875]: I20251212 17:28:29.528227 1875 update_check_scheduler.cc:74] Next update check in 9m8s Dec 12 17:28:29.540704 extend-filesystems[1860]: Resized partition /dev/nvme0n1p9 Dec 12 17:28:29.563174 extend-filesystems[1934]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:28:29.591695 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:28:29.596104 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:28:29.612451 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 12 17:28:29.652019 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:28:29.655619 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 12 17:28:29.743557 coreos-metadata[1855]: Dec 12 17:28:29.742 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 17:28:29.752972 coreos-metadata[1855]: Dec 12 17:28:29.752 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 12 17:28:29.769584 coreos-metadata[1855]: Dec 12 17:28:29.759 INFO Fetch successful Dec 12 17:28:29.769584 coreos-metadata[1855]: Dec 12 17:28:29.765 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 12 17:28:29.770964 coreos-metadata[1855]: Dec 12 17:28:29.770 INFO Fetch successful Dec 12 17:28:29.773204 coreos-metadata[1855]: Dec 12 17:28:29.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 12 17:28:29.784946 coreos-metadata[1855]: Dec 12 17:28:29.783 INFO Fetch successful Dec 12 17:28:29.784946 coreos-metadata[1855]: Dec 12 17:28:29.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 12 17:28:29.799633 coreos-metadata[1855]: Dec 12 17:28:29.789 INFO Fetch successful Dec 12 17:28:29.799633 coreos-metadata[1855]: Dec 12 17:28:29.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 12 17:28:29.799633 coreos-metadata[1855]: Dec 12 17:28:29.797 INFO Fetch failed with 404: resource not found Dec 12 17:28:29.799633 coreos-metadata[1855]: Dec 12 17:28:29.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 12 17:28:29.809934 coreos-metadata[1855]: Dec 12 17:28:29.803 INFO Fetch successful Dec 12 17:28:29.814575 coreos-metadata[1855]: Dec 12 17:28:29.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 12 17:28:29.814575 coreos-metadata[1855]: Dec 12 17:28:29.814 INFO Fetch successful Dec 12 17:28:29.817801 coreos-metadata[1855]: Dec 12 17:28:29.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 12 17:28:29.824634 coreos-metadata[1855]: Dec 12 17:28:29.819 INFO Fetch successful Dec 12 17:28:29.824634 coreos-metadata[1855]: Dec 12 17:28:29.821 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 12 17:28:29.837581 coreos-metadata[1855]: Dec 12 17:28:29.833 INFO Fetch successful Dec 12 17:28:29.837581 coreos-metadata[1855]: Dec 12 17:28:29.834 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 12 17:28:29.838381 coreos-metadata[1855]: Dec 12 17:28:29.838 INFO Fetch successful Dec 12 17:28:29.870212 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 12 17:28:29.881433 extend-filesystems[1934]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 12 17:28:29.881433 extend-filesystems[1934]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 12 17:28:29.881433 extend-filesystems[1934]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 12 17:28:29.905721 extend-filesystems[1860]: Resized filesystem in /dev/nvme0n1p9 Dec 12 17:28:29.904562 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:28:29.926092 bash[1966]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:28:29.915684 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:28:29.917113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:28:29.929962 systemd[1]: Starting sshkeys.service... Dec 12 17:28:29.973036 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 12 17:28:29.983572 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 12 17:28:30.026173 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:28:30.030669 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:28:30.138343 systemd-logind[1874]: New seat seat0. Dec 12 17:28:30.143521 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:28:30.320453 containerd[1901]: time="2025-12-12T17:28:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:28:30.336822 containerd[1901]: time="2025-12-12T17:28:30.329641559Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:28:30.338825 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 12 17:28:30.352970 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 12 17:28:30.355751 systemd-coredump[1909]: Process 1867 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id.
Stack trace of thread 1867: #0 0x0000aaaae07a0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaae074fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaae0750240 n/a (ntpd + 0x10240) #3 0x0000aaaae074be14 n/a (ntpd + 0xbe14) #4 0x0000aaaae074d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaae0755a38 n/a (ntpd + 0x15a38) #6 0x0000aaaae074738c n/a (ntpd + 0x738c) #7 0x0000ffffaca22034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffaca22118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaae07473f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Dec 12 17:28:30.363805 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 12 17:28:30.363117 dbus-daemon[1856]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1924 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 12 17:28:30.364304 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 12 17:28:30.381197 systemd[1]: systemd-coredump@0-1906-0.service: Deactivated successfully. Dec 12 17:28:30.401202 systemd[1]: Starting polkit.service - Authorization Manager... Dec 12 17:28:30.450943 containerd[1901]: time="2025-12-12T17:28:30.450876648Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.772µs" Dec 12 17:28:30.451153 containerd[1901]: time="2025-12-12T17:28:30.451116036Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:28:30.451271 containerd[1901]: time="2025-12-12T17:28:30.451240932Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:28:30.456439 containerd[1901]: time="2025-12-12T17:28:30.453839844Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:28:30.456439 containerd[1901]: time="2025-12-12T17:28:30.453918696Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:28:30.456439 containerd[1901]: time="2025-12-12T17:28:30.453982860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:28:30.456439 containerd[1901]: time="2025-12-12T17:28:30.454134420Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:28:30.456439 containerd[1901]: time="2025-12-12T17:28:30.454165428Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.458686284Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.458757504Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.458793888Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.458815944Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.459057060Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.459561300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.459640944Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:28:30.459751 containerd[1901]: time="2025-12-12T17:28:30.459670512Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:28:30.465435 containerd[1901]: time="2025-12-12T17:28:30.464209308Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:28:30.465435 containerd[1901]: time="2025-12-12T17:28:30.465153456Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:28:30.467346 containerd[1901]: time="2025-12-12T17:28:30.465367188Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.478618296Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.478801500Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.478887456Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479000700Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479060904Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479088480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479149452Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479217204Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479269260Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:28:30.479414 containerd[1901]: time="2025-12-12T17:28:30.479329200Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:28:30.482421 containerd[1901]: time="2025-12-12T17:28:30.479355312Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:28:30.482421 containerd[1901]: time="2025-12-12T17:28:30.480131352Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Dec 12 17:28:30.482564 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 12 17:28:30.490422 coreos-metadata[1984]: Dec 12 17:28:30.484 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.483027168Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.485176188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.485233932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.485264604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.485294304Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.485321988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.485357904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.489436908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.489515940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.489547020Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.489584880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.489974364Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.490017828Z" level=info msg="Start snapshots syncer" Dec 12 17:28:30.490982 containerd[1901]: time="2025-12-12T17:28:30.490061820Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:28:30.492373 coreos-metadata[1984]: Dec 12 17:28:30.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 12 17:28:30.495755 systemd[1]: Started ntpd.service - Network Time Service. 
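[annotation] ntpd segfaulted earlier and systemd-coredump captured a stack trace whose frames carry only module-relative offsets (every module was built without a build-id), after which systemd scheduled the restart seen here. A minimal sketch, assuming the trace text is at hand as a string, that pulls out module+offset pairs so they can later be resolved with a symbolized binary (e.g. via addr2line):

    import re

    # two frames copied from the trace above; offsets are module-relative
    trace = "#0 0x0000aaaae07a0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaae074fe60 n/a (ntpd + 0xfe60)"
    frame = re.compile(r"#(\d+)\s+0x[0-9a-f]+\s+\S+\s+\(([^)+]+?)\s*\+\s*(0x[0-9a-f]+)\)")
    for num, module, offset in frame.findall(trace):
        # feed these to e.g. "addr2line -e <binary-with-symbols> <offset>"
        print(num, module, offset)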
Dec 12 17:28:30.502364 coreos-metadata[1984]: Dec 12 17:28:30.501 INFO Fetch successful Dec 12 17:28:30.502364 coreos-metadata[1984]: Dec 12 17:28:30.501 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 17:28:30.502595 containerd[1901]: time="2025-12-12T17:28:30.497546412Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:28:30.502595 containerd[1901]: time="2025-12-12T17:28:30.497727564Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:28:30.503030 containerd[1901]: time="2025-12-12T17:28:30.499927056Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.503539620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504556488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504603756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504631956Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504663696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504692580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 
containerd[1901]: time="2025-12-12T17:28:30.504723840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504784164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504851256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504885336Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504945480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.504997968Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:28:30.507367 containerd[1901]: time="2025-12-12T17:28:30.505025112Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:28:30.508228 coreos-metadata[1984]: Dec 12 17:28:30.506 INFO Fetch successful Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505052688Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505074840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505101048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505128540Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505317036Z" level=info msg="runtime interface created" Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505339488Z" level=info msg="created NRI interface" Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505377672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505450116Z" level=info msg="Connect containerd service" Dec 12 17:28:30.508336 containerd[1901]: time="2025-12-12T17:28:30.505548948Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:28:30.519869 unknown[1984]: wrote ssh authorized keys file for user: core Dec 12 17:28:30.525873 containerd[1901]: time="2025-12-12T17:28:30.524687412Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:28:30.574520 update-ssh-keys[2043]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:28:30.576764 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:28:30.589382 systemd[1]: Finished sshkeys.service. 
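[annotation] The coreos-metadata fetches above follow the IMDSv2 pattern: a PUT to the token endpoint first (the "Putting .../latest/api/token" lines), then GETs with the token attached. A stdlib-only sketch of the same exchange; the 21600-second TTL is an arbitrary choice:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # IMDSv2: PUT for a session token first
    tok = urllib.request.Request(
        f"{IMDS}/latest/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})
    token = urllib.request.urlopen(tok).read().decode()

    # ...then GET metadata paths with the token attached
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/instance-id",
        headers={"X-aws-ec2-metadata-token": token})
    print(urllib.request.urlopen(req).read().decode())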
Dec 12 17:28:30.622258 systemd-networkd[1813]: eth0: Gained IPv6LL Dec 12 17:28:30.665538 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:28:30.672502 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:28:30.682085 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 12 17:28:30.694970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:28:30.778060 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:28:30.878227 locksmithd[1935]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:28:30.942980 ntpd[2031]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 12 17:28:30.944657 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 12 17:28:30.947420 ntpd[2031]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: ---------------------------------------------------- Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: corporation. Support and training for ntp-4 are Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: available at https://www.nwtime.org/support Dec 12 17:28:30.949896 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: ---------------------------------------------------- Dec 12 17:28:30.947493 ntpd[2031]: ---------------------------------------------------- Dec 12 17:28:30.947512 ntpd[2031]: ntp-4 is maintained by Network Time Foundation, Dec 12 17:28:30.947531 ntpd[2031]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 12 17:28:30.947550 ntpd[2031]: corporation. 
Support and training for ntp-4 are Dec 12 17:28:30.947568 ntpd[2031]: available at https://www.nwtime.org/support Dec 12 17:28:30.947587 ntpd[2031]: ---------------------------------------------------- Dec 12 17:28:30.955307 ntpd[2031]: proto: precision = 0.096 usec (-23) Dec 12 17:28:30.955556 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: proto: precision = 0.096 usec (-23) Dec 12 17:28:30.955715 ntpd[2031]: basedate set to 2025-11-30 Dec 12 17:28:30.955755 ntpd[2031]: gps base set to 2025-11-30 (week 2395) Dec 12 17:28:30.955886 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: basedate set to 2025-11-30 Dec 12 17:28:30.955886 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: gps base set to 2025-11-30 (week 2395) Dec 12 17:28:30.955986 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:28:30.955986 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:28:30.955902 ntpd[2031]: Listen and drop on 0 v6wildcard [::]:123 Dec 12 17:28:30.955947 ntpd[2031]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 12 17:28:30.956236 ntpd[2031]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:28:30.956301 ntpd[2031]: Listen normally on 3 eth0 172.31.24.26:123 Dec 12 17:28:30.956427 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listen normally on 2 lo 127.0.0.1:123 Dec 12 17:28:30.956427 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listen normally on 3 eth0 172.31.24.26:123 Dec 12 17:28:30.956427 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listen normally on 4 lo [::1]:123 Dec 12 17:28:30.956349 ntpd[2031]: Listen normally on 4 lo [::1]:123 Dec 12 17:28:30.968201 ntpd[2031]: Listen normally on 5 eth0 [fe80::4de:5eff:fe75:a2bb%2]:123 Dec 12 17:28:30.968876 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listen normally on 5 eth0 [fe80::4de:5eff:fe75:a2bb%2]:123 Dec 12 17:28:30.968876 ntpd[2031]: 12 Dec 17:28:30 ntpd[2031]: Listening on routing socket on fd #22 for interface updates Dec 12 17:28:30.968307 ntpd[2031]: Listening on routing socket on fd #22 for interface updates Dec 12 17:28:31.030806 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:28:31.049058 ntpd[2031]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:28:31.053051 ntpd[2031]: 12 Dec 17:28:31 ntpd[2031]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:28:31.053051 ntpd[2031]: 12 Dec 17:28:31 ntpd[2031]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:28:31.049114 ntpd[2031]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 12 17:28:31.176903 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 12 17:28:31.193057 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
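[annotation] The "kernel reports TIME_ERROR: 0x41" lines persist until ntpd disciplines the clock. The value is a bitmask of kernel timex status flags: 0x41 = STA_PLL | STA_UNSYNC, i.e. the PLL is enabled but the clock is still flagged unsynchronized. A short decoder using the standard flag values from <sys/timex.h>:

    # timex status bits from <sys/timex.h>; 0x41 == STA_PLL | STA_UNSYNC
    STA_FLAGS = {
        0x0001: "STA_PLL",     # PLL updates enabled
        0x0008: "STA_FLL",     # frequency-locked loop mode
        0x0040: "STA_UNSYNC",  # clock not yet synchronized
        0x2000: "STA_NANO",    # nanosecond kernel resolution
    }

    def decode(status: int) -> list[str]:
        return [name for bit, name in STA_FLAGS.items() if status & bit]

    print(decode(0x41))  # ['STA_PLL', 'STA_UNSYNC']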
Dec 12 17:28:31.217426 containerd[1901]: time="2025-12-12T17:28:31.214503383Z" level=info msg="Start subscribing containerd event" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225589056Z" level=info msg="Start recovering state" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225816168Z" level=info msg="Start event monitor" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225856896Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225879108Z" level=info msg="Start streaming server" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225902040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225953700Z" level=info msg="runtime interface starting up..." Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.225972408Z" level=info msg="starting plugins..." Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.226005456Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.215927339Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:28:31.226424 containerd[1901]: time="2025-12-12T17:28:31.226320996Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:28:31.236983 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:28:31.248775 containerd[1901]: time="2025-12-12T17:28:31.246502608Z" level=info msg="containerd successfully booted in 0.930816s" Dec 12 17:28:31.268564 amazon-ssm-agent[2058]: Initializing new seelog logger Dec 12 17:28:31.270086 amazon-ssm-agent[2058]: New Seelog Logger Creation Complete Dec 12 17:28:31.274722 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.274722 amazon-ssm-agent[2058]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.274722 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 processing appconfig overrides Dec 12 17:28:31.279500 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.279674 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.2791 INFO Proxy environment variables: Dec 12 17:28:31.281969 amazon-ssm-agent[2058]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.282183 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 processing appconfig overrides Dec 12 17:28:31.288799 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.288799 amazon-ssm-agent[2058]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.288799 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 processing appconfig overrides Dec 12 17:28:31.308562 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:31.308838 amazon-ssm-agent[2058]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
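[annotation] The earlier "no network config found in /etc/cni/net.d" error is expected on first boot: the CNI conf syncer started above watches that directory and picks a config up once a CNI addon installs one. A hedged sketch of writing a minimal bridge conflist by hand, assuming the standard bridge/host-local/portmap plugins exist under /opt/cni/bin; the name and subnet are illustrative, and real clusters get this file from their network addon:

    import json, pathlib

    conflist = {
        "cniVersion": "1.0.0",
        "name": "example-bridge",                 # illustrative name
        "plugins": [
            {"type": "bridge", "bridge": "cni0", "isGateway": True, "ipMasq": True,
             "ipam": {"type": "host-local",
                      "ranges": [[{"subnet": "10.88.0.0/16"}]],  # illustrative subnet
                      "routes": [{"dst": "0.0.0.0/0"}]}},
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    # requires root; the conf syncer notices the new file without a restart
    pathlib.Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
    pathlib.Path("/etc/cni/net.d/10-example.conflist").write_text(json.dumps(conflist, indent=2))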
Dec 12 17:28:31.309782 amazon-ssm-agent[2058]: 2025/12/12 17:28:31 processing appconfig overrides Dec 12 17:28:31.335119 sshd_keygen[1895]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:28:31.385456 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.2791 INFO https_proxy: Dec 12 17:28:31.388226 polkitd[2027]: Started polkitd version 126 Dec 12 17:28:31.441566 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:28:31.459135 polkitd[2027]: Loading rules from directory /etc/polkit-1/rules.d Dec 12 17:28:31.465216 polkitd[2027]: Loading rules from directory /run/polkit-1/rules.d Dec 12 17:28:31.465572 polkitd[2027]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 17:28:31.472501 polkitd[2027]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 12 17:28:31.473107 polkitd[2027]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 12 17:28:31.476458 polkitd[2027]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 12 17:28:31.480171 polkitd[2027]: Finished loading, compiling and executing 2 rules Dec 12 17:28:31.481777 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.2791 INFO http_proxy: Dec 12 17:28:31.482760 systemd[1]: Started polkit.service - Authorization Manager. Dec 12 17:28:31.493222 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 12 17:28:31.500219 polkitd[2027]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 12 17:28:31.574806 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:28:31.591095 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.2791 INFO no_proxy: Dec 12 17:28:31.584013 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:28:31.592835 systemd[1]: Started sshd@0-172.31.24.26:22-147.75.109.163:45880.service - OpenSSH per-connection server daemon (147.75.109.163:45880). Dec 12 17:28:31.620525 systemd-hostnamed[1924]: Hostname set to (transient) Dec 12 17:28:31.620535 systemd-resolved[1764]: System hostname changed to 'ip-172-31-24-26'. Dec 12 17:28:31.693791 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.2831 INFO Checking if agent identity type OnPrem can be assumed Dec 12 17:28:31.753234 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:28:31.756702 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:28:31.765561 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:28:31.797485 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.2852 INFO Checking if agent identity type EC2 can be assumed Dec 12 17:28:31.857535 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:28:31.871640 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:28:31.879283 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 12 17:28:31.882703 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:28:31.898500 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.6678 INFO Agent will take identity from EC2 Dec 12 17:28:31.999901 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.6797 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 12 17:28:32.004946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
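[annotation] polkitd scans four rules directories in a fixed order and logs an error for each absent one; on this image the /run and /usr/local directories are missing (the g-file-error-quark lines), and the 2 rules it compiles come from the directories that do exist. A quick check of the same search path as listed in the log:

    import os

    # the four directories polkitd reported scanning, in order
    RULE_DIRS = [
        "/etc/polkit-1/rules.d",
        "/run/polkit-1/rules.d",
        "/usr/local/share/polkit-1/rules.d",
        "/usr/share/polkit-1/rules.d",
    ]
    for d in RULE_DIRS:
        if os.path.isdir(d):
            print(d, sorted(f for f in os.listdir(d) if f.endswith(".rules")))
        else:
            print(d, "missing")  # polkitd logs g-file-error-quark, 4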
Dec 12 17:28:32.023647 systemd-logind[1874]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 12 17:28:32.027128 systemd-logind[1874]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:28:32.108846 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.6798 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 12 17:28:32.209542 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.6798 INFO [amazon-ssm-agent] Starting Core Agent Dec 12 17:28:32.237538 sshd[2124]: Accepted publickey for core from 147.75.109.163 port 45880 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:32.261640 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:32.313609 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:28:32.332581 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.6798 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 12 17:28:32.324883 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:28:32.383805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:28:32.424419 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.6798 INFO [Registrar] Starting registrar module Dec 12 17:28:32.432497 systemd-logind[1874]: New session 1 of user core. Dec 12 17:28:32.481648 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:28:32.495909 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:28:32.526500 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.7038 INFO [EC2Identity] Checking disk for registration info Dec 12 17:28:32.567726 (systemd)[2155]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:28:32.582918 systemd-logind[1874]: New session c1 of user core. Dec 12 17:28:32.623522 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.7039 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 12 17:28:32.682018 tar[1883]: linux-arm64/README.md Dec 12 17:28:32.728435 amazon-ssm-agent[2058]: 2025-12-12 17:28:31.7039 INFO [EC2Identity] Generating registration keypair Dec 12 17:28:32.781587 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:28:33.086577 systemd[2155]: Queued start job for default target default.target. Dec 12 17:28:33.092291 systemd[2155]: Created slice app.slice - User Application Slice. Dec 12 17:28:33.092429 systemd[2155]: Reached target paths.target - Paths. Dec 12 17:28:33.092569 systemd[2155]: Reached target timers.target - Timers. Dec 12 17:28:33.096885 systemd[2155]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:28:33.176158 systemd[2155]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:28:33.176316 systemd[2155]: Reached target sockets.target - Sockets. Dec 12 17:28:33.176475 systemd[2155]: Reached target basic.target - Basic System. Dec 12 17:28:33.176588 systemd[2155]: Reached target default.target - Main User Target. Dec 12 17:28:33.176664 systemd[2155]: Startup finished in 554ms. Dec 12 17:28:33.177056 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:28:33.201204 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:28:33.379285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:28:33.384884 systemd[1]: Reached target multi-user.target - Multi-User System. 
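[annotation] sshd logs the client key above as "RSA SHA256:hFEB…": an unpadded base64 encoding of the SHA-256 digest of the raw public-key blob. The same fingerprint can be reproduced from an authorized_keys entry with the standard library:

    import base64, hashlib

    def ssh_sha256_fingerprint(authorized_keys_line: str) -> str:
        # field 1 is the base64 key blob ("ssh-rsa AAAA... comment")
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        # OpenSSH prints the digest base64-encoded with padding stripped
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")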
Dec 12 17:28:33.393168 systemd[1]: Startup finished in 3.780s (kernel) + 9.435s (initrd) + 10.324s (userspace) = 23.540s. Dec 12 17:28:33.421479 systemd[1]: Started sshd@1-172.31.24.26:22-147.75.109.163:54436.service - OpenSSH per-connection server daemon (147.75.109.163:54436). Dec 12 17:28:33.424077 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:28:33.712077 sshd[2251]: Accepted publickey for core from 147.75.109.163 port 54436 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:33.714651 sshd-session[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:33.727719 systemd-logind[1874]: New session 2 of user core. Dec 12 17:28:33.736262 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:28:33.872499 sshd[2262]: Connection closed by 147.75.109.163 port 54436 Dec 12 17:28:33.875225 sshd-session[2251]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:33.887800 systemd[1]: sshd@1-172.31.24.26:22-147.75.109.163:54436.service: Deactivated successfully. Dec 12 17:28:33.893309 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:28:33.902874 systemd-logind[1874]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:28:33.924980 systemd[1]: Started sshd@2-172.31.24.26:22-147.75.109.163:54446.service - OpenSSH per-connection server daemon (147.75.109.163:54446). Dec 12 17:28:33.931515 systemd-logind[1874]: Removed session 2. Dec 12 17:28:34.182865 sshd[2268]: Accepted publickey for core from 147.75.109.163 port 54446 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:34.186570 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:34.205064 systemd-logind[1874]: New session 3 of user core. Dec 12 17:28:34.210737 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:28:34.337684 sshd[2272]: Connection closed by 147.75.109.163 port 54446 Dec 12 17:28:34.337477 sshd-session[2268]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:34.349085 systemd[1]: sshd@2-172.31.24.26:22-147.75.109.163:54446.service: Deactivated successfully. Dec 12 17:28:34.354121 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:28:34.365110 systemd-logind[1874]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:28:34.388045 systemd[1]: Started sshd@3-172.31.24.26:22-147.75.109.163:54456.service - OpenSSH per-connection server daemon (147.75.109.163:54456). Dec 12 17:28:34.390717 systemd-logind[1874]: Removed session 3. Dec 12 17:28:34.479487 kubelet[2249]: E1212 17:28:34.479252 2249 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:28:34.485784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:28:34.486128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:28:34.487122 systemd[1]: kubelet.service: Consumed 1.423s CPU time, 247.5M memory peak. 
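[annotation] The "Startup finished" line above breaks boot time into kernel, initrd and userspace phases that sum (within rounding) to the quoted total. Parsing the figures back out of the journal text:

    import re

    line = ("Startup finished in 3.780s (kernel) + 9.435s (initrd) "
            "+ 10.324s (userspace) = 23.540s.")
    parts = {name: float(s) for s, name in re.findall(r"(\d+\.\d+)s \((\w+)\)", line)}
    total = float(re.search(r"= (\d+\.\d+)s", line).group(1))
    assert abs(sum(parts.values()) - total) < 0.01  # 23.539 vs 23.540: rounding
    print(parts, total)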
Dec 12 17:28:34.609632 sshd[2278]: Accepted publickey for core from 147.75.109.163 port 54456 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:34.611375 sshd-session[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:34.623792 systemd-logind[1874]: New session 4 of user core. Dec 12 17:28:34.626759 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:28:34.757159 sshd[2282]: Connection closed by 147.75.109.163 port 54456 Dec 12 17:28:34.756938 sshd-session[2278]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:34.769190 systemd[1]: sshd@3-172.31.24.26:22-147.75.109.163:54456.service: Deactivated successfully. Dec 12 17:28:34.774251 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:28:34.779674 systemd-logind[1874]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:28:34.801834 systemd[1]: Started sshd@4-172.31.24.26:22-147.75.109.163:54458.service - OpenSSH per-connection server daemon (147.75.109.163:54458). Dec 12 17:28:34.805350 systemd-logind[1874]: Removed session 4. Dec 12 17:28:34.999269 sshd[2288]: Accepted publickey for core from 147.75.109.163 port 54458 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:35.000369 sshd-session[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:35.011493 systemd-logind[1874]: New session 5 of user core. Dec 12 17:28:35.016701 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:28:35.154846 sudo[2292]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:28:35.155573 sudo[2292]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:28:35.176951 sudo[2292]: pam_unix(sudo:session): session closed for user root Dec 12 17:28:35.199704 sshd[2291]: Connection closed by 147.75.109.163 port 54458 Dec 12 17:28:35.200777 sshd-session[2288]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:35.210770 systemd[1]: sshd@4-172.31.24.26:22-147.75.109.163:54458.service: Deactivated successfully. Dec 12 17:28:35.215687 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:28:35.222002 systemd-logind[1874]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:28:35.238738 systemd[1]: Started sshd@5-172.31.24.26:22-147.75.109.163:54460.service - OpenSSH per-connection server daemon (147.75.109.163:54460). Dec 12 17:28:35.241556 systemd-logind[1874]: Removed session 5. Dec 12 17:28:35.431434 sshd[2298]: Accepted publickey for core from 147.75.109.163 port 54460 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:35.434500 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:35.445485 systemd-logind[1874]: New session 6 of user core. Dec 12 17:28:35.448679 systemd[1]: Started session-6.scope - Session 6 of User core. 
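[annotation] Each sudo entry records the invoking user, working directory, target user and exact command in a fixed "key=value ; " layout, which makes audit parsing straightforward. A sketch against the setenforce line above; it assumes PWD contains no spaces, which holds for every entry in this log:

    import re

    entry = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
    m = re.match(
        r"(?P<who>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<user>\S+) ; COMMAND=(?P<cmd>.+)",
        entry)
    print(m.groupdict())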
Dec 12 17:28:35.554122 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:28:35.554850 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:28:35.567090 sudo[2303]: pam_unix(sudo:session): session closed for user root Dec 12 17:28:35.577991 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:28:35.579075 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:28:35.600018 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:28:35.668574 augenrules[2325]: No rules Dec 12 17:28:35.670864 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:28:35.671457 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:28:35.673663 sudo[2302]: pam_unix(sudo:session): session closed for user root Dec 12 17:28:35.685213 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.6844 INFO [EC2Identity] Checking write access before registering Dec 12 17:28:35.697420 sshd[2301]: Connection closed by 147.75.109.163 port 54460 Dec 12 17:28:35.698474 sshd-session[2298]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:35.707578 systemd-logind[1874]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:28:35.708281 systemd[1]: sshd@5-172.31.24.26:22-147.75.109.163:54460.service: Deactivated successfully. Dec 12 17:28:35.712145 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:28:35.716255 systemd-logind[1874]: Removed session 6. Dec 12 17:28:35.733878 systemd[1]: Started sshd@6-172.31.24.26:22-147.75.109.163:54470.service - OpenSSH per-connection server daemon (147.75.109.163:54470). Dec 12 17:28:35.750635 amazon-ssm-agent[2058]: 2025/12/12 17:28:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:35.750635 amazon-ssm-agent[2058]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 12 17:28:35.750843 amazon-ssm-agent[2058]: 2025/12/12 17:28:35 processing appconfig overrides Dec 12 17:28:35.785896 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.6867 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 12 17:28:35.788987 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7502 INFO [EC2Identity] EC2 registration was successful. Dec 12 17:28:35.788987 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7503 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
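[annotation] The two sudo invocations above removed the shipped audit rules and restarted audit-rules.service, after which augenrules reports "No rules". A quick verification of the live ruleset, assuming auditctl is installed and this runs as root:

    import subprocess

    # after the rules files were removed, the live ruleset should be empty
    print(subprocess.run(["auditctl", "-l"], capture_output=True, text=True).stdout)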
Dec 12 17:28:35.788987 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7504 INFO [CredentialRefresher] credentialRefresher has started Dec 12 17:28:35.788987 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7505 INFO [CredentialRefresher] Starting credentials refresher loop Dec 12 17:28:35.788987 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7885 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 12 17:28:35.789303 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7888 INFO [CredentialRefresher] Credentials ready Dec 12 17:28:35.885752 amazon-ssm-agent[2058]: 2025-12-12 17:28:35.7890 INFO [CredentialRefresher] Next credential rotation will be in 29.9999920396 minutes Dec 12 17:28:35.925252 sshd[2334]: Accepted publickey for core from 147.75.109.163 port 54470 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:28:35.927821 sshd-session[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:35.938494 systemd-logind[1874]: New session 7 of user core. Dec 12 17:28:35.942671 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:28:36.048239 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:28:36.049512 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:28:36.820851 amazon-ssm-agent[2058]: 2025-12-12 17:28:36.8206 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 12 17:28:36.884131 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:28:36.898005 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:28:36.921631 amazon-ssm-agent[2058]: 2025-12-12 17:28:36.8269 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2360) started Dec 12 17:28:37.030040 amazon-ssm-agent[2058]: 2025-12-12 17:28:36.8270 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 12 17:28:37.445176 dockerd[2364]: time="2025-12-12T17:28:37.444418722Z" level=info msg="Starting up" Dec 12 17:28:37.446255 dockerd[2364]: time="2025-12-12T17:28:37.446198214Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:28:37.467183 dockerd[2364]: time="2025-12-12T17:28:37.467121091Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:28:37.504683 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2823553462-merged.mount: Deactivated successfully. Dec 12 17:28:37.535130 systemd[1]: var-lib-docker-metacopy\x2dcheck1320850879-merged.mount: Deactivated successfully. Dec 12 17:28:37.549968 dockerd[2364]: time="2025-12-12T17:28:37.549908539Z" level=info msg="Loading containers: start." Dec 12 17:28:37.565451 kernel: Initializing XFRM netlink socket Dec 12 17:28:37.902655 (udev-worker)[2396]: Network interface NamePolicy= disabled on kernel command line. Dec 12 17:28:38.280234 systemd-resolved[1764]: Clock change detected. Flushing caches. Dec 12 17:28:38.314628 systemd-networkd[1813]: docker0: Link UP Dec 12 17:28:38.329441 dockerd[2364]: time="2025-12-12T17:28:38.329395572Z" level=info msg="Loading containers: done." 
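[annotation] dockerd is initializing here; once it logs "API listen on /run/docker.sock" (a few lines below), liveness can be checked with a plain HTTP GET of /_ping over the Unix socket, which is the same thing the Docker SDK does underneath. A stdlib-only sketch:

    import http.client, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # speaks HTTP over an AF_UNIX stream socket instead of TCP
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")  # needs root or docker group
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())       # 200 OK when the daemon is healthy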
Dec 12 17:28:38.365252 dockerd[2364]: time="2025-12-12T17:28:38.364716312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:28:38.365252 dockerd[2364]: time="2025-12-12T17:28:38.364838220Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:28:38.365252 dockerd[2364]: time="2025-12-12T17:28:38.364983216Z" level=info msg="Initializing buildkit" Dec 12 17:28:38.423651 dockerd[2364]: time="2025-12-12T17:28:38.423602424Z" level=info msg="Completed buildkit initialization" Dec 12 17:28:38.441684 dockerd[2364]: time="2025-12-12T17:28:38.441604524Z" level=info msg="Daemon has completed initialization" Dec 12 17:28:38.442024 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:28:38.443472 dockerd[2364]: time="2025-12-12T17:28:38.442183164Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:28:39.373030 containerd[1901]: time="2025-12-12T17:28:39.372896077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 17:28:40.091084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322268244.mount: Deactivated successfully. Dec 12 17:28:41.543689 containerd[1901]: time="2025-12-12T17:28:41.543603448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:41.545513 containerd[1901]: time="2025-12-12T17:28:41.545439148Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Dec 12 17:28:41.548129 containerd[1901]: time="2025-12-12T17:28:41.548052340Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:41.555341 containerd[1901]: time="2025-12-12T17:28:41.553658140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:41.555728 containerd[1901]: time="2025-12-12T17:28:41.555682648Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.182716275s" Dec 12 17:28:41.555860 containerd[1901]: time="2025-12-12T17:28:41.555832756Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 12 17:28:41.556678 containerd[1901]: time="2025-12-12T17:28:41.556629472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 17:28:42.870332 containerd[1901]: time="2025-12-12T17:28:42.870230334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:42.872949 containerd[1901]: time="2025-12-12T17:28:42.872868078Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Dec 12 17:28:42.875638 containerd[1901]: time="2025-12-12T17:28:42.875567526Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:42.881111 containerd[1901]: time="2025-12-12T17:28:42.881013390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:42.883352 containerd[1901]: time="2025-12-12T17:28:42.882964386Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.326275106s" Dec 12 17:28:42.883352 containerd[1901]: time="2025-12-12T17:28:42.883024806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 12 17:28:42.884546 containerd[1901]: time="2025-12-12T17:28:42.884486838Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 17:28:43.944417 containerd[1901]: time="2025-12-12T17:28:43.944196980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:43.947192 containerd[1901]: time="2025-12-12T17:28:43.947110052Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Dec 12 17:28:43.950109 containerd[1901]: time="2025-12-12T17:28:43.950047748Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:43.963352 containerd[1901]: time="2025-12-12T17:28:43.962757860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:43.964089 containerd[1901]: time="2025-12-12T17:28:43.964043564Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.07949297s" Dec 12 17:28:43.964248 containerd[1901]: time="2025-12-12T17:28:43.964218656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 12 17:28:43.965245 containerd[1901]: time="2025-12-12T17:28:43.965189216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 17:28:45.069378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:28:45.074551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
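[annotation] Each completed pull logs both the total bytes read and the wall-clock duration, so effective registry throughput falls straight out of the journal. Using the kube-apiserver figures above:

    bytes_read = 24_571_040   # "bytes read=24571040" for kube-apiserver
    seconds = 2.182716275     # "... in 2.182716275s"
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~11.3 MB/s from registry.k8s.io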
Dec 12 17:28:45.250582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346870963.mount: Deactivated successfully. Dec 12 17:28:45.469583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:28:45.485380 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:28:45.588626 kubelet[2667]: E1212 17:28:45.586893 2667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:28:45.595950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:28:45.596400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:28:45.597191 systemd[1]: kubelet.service: Consumed 362ms CPU time, 105.3M memory peak. Dec 12 17:28:45.856028 containerd[1901]: time="2025-12-12T17:28:45.855544053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:45.858043 containerd[1901]: time="2025-12-12T17:28:45.857944077Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Dec 12 17:28:45.861137 containerd[1901]: time="2025-12-12T17:28:45.861043629Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:45.865909 containerd[1901]: time="2025-12-12T17:28:45.865789353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:45.868470 containerd[1901]: time="2025-12-12T17:28:45.867744213Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.902495285s" Dec 12 17:28:45.868470 containerd[1901]: time="2025-12-12T17:28:45.867834729Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 12 17:28:45.868921 containerd[1901]: time="2025-12-12T17:28:45.868881201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 17:28:46.559520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608121206.mount: Deactivated successfully. 
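[annotation] The kubelet crash-loop above is the normal state of a node that has not yet been initialized: the unit starts, fails to find /var/lib/kubelet/config.yaml (kubeadm init/join is what normally writes it), exits 1, and systemd schedules the next restart. A trivial probe for that precondition:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print("kubelet config present:", cfg)
    else:
        # until kubeadm (or other provisioning) writes this file, kubelet exits
        # with the "failed to load Kubelet config file" error seen in the log
        print("node not initialized yet; kubelet will keep crash-looping")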
Dec 12 17:28:47.928789 containerd[1901]: time="2025-12-12T17:28:47.928701563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:47.932912 containerd[1901]: time="2025-12-12T17:28:47.932815367Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Dec 12 17:28:47.938019 containerd[1901]: time="2025-12-12T17:28:47.937942871Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:47.956358 containerd[1901]: time="2025-12-12T17:28:47.955355340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:47.956869 containerd[1901]: time="2025-12-12T17:28:47.956820504Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.087397887s" Dec 12 17:28:47.957017 containerd[1901]: time="2025-12-12T17:28:47.956983200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 12 17:28:47.957742 containerd[1901]: time="2025-12-12T17:28:47.957685836Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 17:28:48.492284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180663990.mount: Deactivated successfully. 
Dec 12 17:28:48.506810 containerd[1901]: time="2025-12-12T17:28:48.506722810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:48.508678 containerd[1901]: time="2025-12-12T17:28:48.508602310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Dec 12 17:28:48.511409 containerd[1901]: time="2025-12-12T17:28:48.511333078Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:48.516364 containerd[1901]: time="2025-12-12T17:28:48.516071626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:48.517888 containerd[1901]: time="2025-12-12T17:28:48.517831474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 559.837646ms" Dec 12 17:28:48.518191 containerd[1901]: time="2025-12-12T17:28:48.518043874Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 12 17:28:48.519061 containerd[1901]: time="2025-12-12T17:28:48.518979538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 17:28:49.130018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375549598.mount: Deactivated successfully. Dec 12 17:28:52.396096 containerd[1901]: time="2025-12-12T17:28:52.396002894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:52.398057 containerd[1901]: time="2025-12-12T17:28:52.397981502Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Dec 12 17:28:52.400580 containerd[1901]: time="2025-12-12T17:28:52.400503926Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:52.406302 containerd[1901]: time="2025-12-12T17:28:52.406223438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:28:52.408863 containerd[1901]: time="2025-12-12T17:28:52.408214478Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.889163456s" Dec 12 17:28:52.408863 containerd[1901]: time="2025-12-12T17:28:52.408277610Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 12 17:28:55.846816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
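[annotation] The pause and etcd pulls above land, like the other CRI-initiated pulls, in containerd's "k8s.io" namespace, where they can be listed with ctr. A hedged wrapper, assuming ctr is on PATH and containerd uses its default socket:

    import subprocess

    # list image refs in the "k8s.io" namespace that CRI pulls populate
    refs = subprocess.run(
        ["ctr", "-n", "k8s.io", "images", "ls", "-q"],
        check=True, capture_output=True, text=True,
    ).stdout.splitlines()
    print("\n".join(refs))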
Dec 12 17:28:55.851646 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:28:56.215590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:28:56.231100 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 12 17:28:56.312355 kubelet[2812]: E1212 17:28:56.311289 2812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 12 17:28:56.315717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 12 17:28:56.316039 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 12 17:28:56.317090 systemd[1]: kubelet.service: Consumed 309ms CPU time, 106.4M memory peak.
Dec 12 17:29:00.154010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:00.155258 systemd[1]: kubelet.service: Consumed 309ms CPU time, 106.4M memory peak.
Dec 12 17:29:00.162430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:00.217479 systemd[1]: Reload requested from client PID 2826 ('systemctl') (unit session-7.scope)...
Dec 12 17:29:00.217514 systemd[1]: Reloading...
Dec 12 17:29:00.512360 zram_generator::config[2871]: No configuration found.
Dec 12 17:29:01.005548 systemd[1]: Reloading finished in 787 ms.
Dec 12 17:29:01.094160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 12 17:29:01.094447 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 12 17:29:01.095672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:01.095786 systemd[1]: kubelet.service: Consumed 278ms CPU time, 95M memory peak.
Dec 12 17:29:01.099393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:01.507044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:01.531884 (kubelet)[2935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:29:01.607364 kubelet[2935]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:29:01.607364 kubelet[2935]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:29:01.608663 kubelet[2935]: I1212 17:29:01.608581 2935 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:29:01.989919 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
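
The status=1/FAILURE restart loop above is the normal pre-bootstrap state on a kubeadm node: the unit keeps restarting until kubeadm writes /var/lib/kubelet/config.yaml during init/join. A small sketch of the same check the kubelet trips over in run.go:72; the generic YAML decoding here is an illustrative assumption (the real kubelet decodes a typed KubeletConfiguration object):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"

        data, err := os.ReadFile(path)
        if err != nil {
            // Same condition as the failure above: the file does not exist
            // until kubeadm writes it.
            fmt.Fprintf(os.Stderr, "kubelet would exit here: %v\n", err)
            os.Exit(1)
        }

        var cfg map[string]any
        if err := yaml.Unmarshal(data, &cfg); err != nil {
            fmt.Fprintf(os.Stderr, "unparseable kubelet config: %v\n", err)
            os.Exit(1)
        }
        fmt.Printf("kind=%v apiVersion=%v\n", cfg["kind"], cfg["apiVersion"])
    }
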
Dec 12 17:29:04.410961 kubelet[2935]: I1212 17:29:04.410893 2935 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 12 17:29:04.410961 kubelet[2935]: I1212 17:29:04.410944 2935 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:29:04.413523 kubelet[2935]: I1212 17:29:04.413461 2935 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 12 17:29:04.413523 kubelet[2935]: I1212 17:29:04.413514 2935 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 17:29:04.414231 kubelet[2935]: I1212 17:29:04.414178 2935 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:29:04.425421 kubelet[2935]: E1212 17:29:04.425362 2935 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 12 17:29:04.427385 kubelet[2935]: I1212 17:29:04.427342 2935 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:29:04.434334 kubelet[2935]: I1212 17:29:04.434274 2935 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:29:04.440374 kubelet[2935]: I1212 17:29:04.439574 2935 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 12 17:29:04.440374 kubelet[2935]: I1212 17:29:04.440005 2935 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:29:04.440374 kubelet[2935]: I1212 17:29:04.440042 2935 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:29:04.440374 kubelet[2935]: I1212 17:29:04.440278 2935 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:29:04.440760 kubelet[2935]: I1212 17:29:04.440294 2935 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 17:29:04.440760 kubelet[2935]: I1212 17:29:04.440486 2935 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 12 17:29:04.444908 kubelet[2935]: I1212 17:29:04.444869 2935 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:29:04.447282 kubelet[2935]: I1212 17:29:04.447234 2935 kubelet.go:475] "Attempting to sync node with API server"
Dec 12 17:29:04.447414 kubelet[2935]: I1212 17:29:04.447334 2935 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:29:04.447414 kubelet[2935]: I1212 17:29:04.447384 2935 kubelet.go:387] "Adding apiserver pod source"
Dec 12 17:29:04.447414 kubelet[2935]: I1212 17:29:04.447411 2935 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:29:04.451367 kubelet[2935]: I1212 17:29:04.449909 2935 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:29:04.451367 kubelet[2935]: I1212 17:29:04.450990 2935 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:29:04.451367 kubelet[2935]: I1212 17:29:04.451039 2935 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 17:29:04.451367 kubelet[2935]: W1212 17:29:04.451109 2935 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 12 17:29:04.456709 kubelet[2935]: I1212 17:29:04.456661 2935 server.go:1262] "Started kubelet"
Dec 12 17:29:04.457030 kubelet[2935]: E1212 17:29:04.456979 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 17:29:04.459943 kubelet[2935]: E1212 17:29:04.459900 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-26&limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 17:29:04.460520 kubelet[2935]: I1212 17:29:04.460476 2935 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:29:04.465224 kubelet[2935]: I1212 17:29:04.465110 2935 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:29:04.465398 kubelet[2935]: I1212 17:29:04.465235 2935 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 12 17:29:04.465819 kubelet[2935]: I1212 17:29:04.465764 2935 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:29:04.468215 kubelet[2935]: I1212 17:29:04.468087 2935 server.go:310] "Adding debug handlers to kubelet server"
Dec 12 17:29:04.475015 kubelet[2935]: I1212 17:29:04.474951 2935 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:29:04.476641 kubelet[2935]: E1212 17:29:04.472823 2935 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.26:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-26.188087f5f9c82d4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-26,UID:ip-172-31-24-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-26,},FirstTimestamp:2025-12-12 17:29:04.456617294 +0000 UTC m=+2.919405628,LastTimestamp:2025-12-12 17:29:04.456617294 +0000 UTC m=+2.919405628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-26,}"
Dec 12 17:29:04.480748 kubelet[2935]: I1212 17:29:04.480692 2935 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 17:29:04.483750 kubelet[2935]: I1212 17:29:04.483716 2935 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 12 17:29:04.484824 kubelet[2935]: E1212 17:29:04.484777 2935 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-26\" not found"
Dec 12 17:29:04.488557 kubelet[2935]: I1212 17:29:04.487423 2935 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 17:29:04.488557 kubelet[2935]: I1212 17:29:04.487713 2935 reconciler.go:29] "Reconciler: start to sync state"
Dec 12 17:29:04.488557 kubelet[2935]: E1212 17:29:04.487982 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 17:29:04.488557 kubelet[2935]: E1212 17:29:04.488108 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-26?timeout=10s\": dial tcp 172.31.24.26:6443: connect: connection refused" interval="200ms"
Dec 12 17:29:04.489787 kubelet[2935]: I1212 17:29:04.489748 2935 factory.go:223] Registration of the systemd container factory successfully
Dec 12 17:29:04.490192 kubelet[2935]: I1212 17:29:04.490155 2935 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 17:29:04.492963 kubelet[2935]: E1212 17:29:04.492797 2935 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 17:29:04.495090 kubelet[2935]: I1212 17:29:04.495054 2935 factory.go:223] Registration of the containerd container factory successfully
Dec 12 17:29:04.523643 kubelet[2935]: I1212 17:29:04.523582 2935 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 12 17:29:04.528348 kubelet[2935]: I1212 17:29:04.528238 2935 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 12 17:29:04.528348 kubelet[2935]: I1212 17:29:04.528281 2935 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 12 17:29:04.528348 kubelet[2935]: I1212 17:29:04.528342 2935 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 12 17:29:04.528541 kubelet[2935]: E1212 17:29:04.528409 2935 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 17:29:04.534109 kubelet[2935]: E1212 17:29:04.533825 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 17:29:04.541267 kubelet[2935]: I1212 17:29:04.540835 2935 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 17:29:04.541267 kubelet[2935]: I1212 17:29:04.540870 2935 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 17:29:04.541267 kubelet[2935]: I1212 17:29:04.540909 2935 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:29:04.543582 kubelet[2935]: I1212 17:29:04.543518 2935 policy_none.go:49] "None policy: Start"
Dec 12 17:29:04.543582 kubelet[2935]: I1212 17:29:04.543557 2935 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 12 17:29:04.543582 kubelet[2935]: I1212 17:29:04.543582 2935 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 12 17:29:04.545340 kubelet[2935]: I1212 17:29:04.545280 2935 policy_none.go:47] "Start"
Dec 12 17:29:04.554765 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 12 17:29:04.577644 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 12 17:29:04.585600 kubelet[2935]: E1212 17:29:04.585486 2935 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-26\" not found"
Dec 12 17:29:04.586634 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 12 17:29:04.607795 kubelet[2935]: E1212 17:29:04.607749 2935 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 17:29:04.609584 kubelet[2935]: I1212 17:29:04.609551 2935 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 17:29:04.609770 kubelet[2935]: I1212 17:29:04.609722 2935 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 17:29:04.610560 kubelet[2935]: I1212 17:29:04.610531 2935 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 17:29:04.613961 kubelet[2935]: E1212 17:29:04.613912 2935 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 17:29:04.614132 kubelet[2935]: E1212 17:29:04.614011 2935 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-26\" not found"
Dec 12 17:29:04.650627 systemd[1]: Created slice kubepods-burstable-podaa80954f089034eae32caf614c2110cc.slice - libcontainer container kubepods-burstable-podaa80954f089034eae32caf614c2110cc.slice.
Dec 12 17:29:04.675379 kubelet[2935]: E1212 17:29:04.673807 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:04.684371 systemd[1]: Created slice kubepods-burstable-pod3c965744df70f81bfc03d84579a98483.slice - libcontainer container kubepods-burstable-pod3c965744df70f81bfc03d84579a98483.slice.
Dec 12 17:29:04.688913 kubelet[2935]: E1212 17:29:04.688871 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:04.689810 kubelet[2935]: I1212 17:29:04.689526 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:04.690023 kubelet[2935]: I1212 17:29:04.689984 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c965744df70f81bfc03d84579a98483-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-26\" (UID: \"3c965744df70f81bfc03d84579a98483\") " pod="kube-system/kube-scheduler-ip-172-31-24-26"
Dec 12 17:29:04.690161 kubelet[2935]: I1212 17:29:04.690136 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/384ad097fc073576937dce7f2902a49f-ca-certs\") pod \"kube-apiserver-ip-172-31-24-26\" (UID: \"384ad097fc073576937dce7f2902a49f\") " pod="kube-system/kube-apiserver-ip-172-31-24-26"
Dec 12 17:29:04.690383 kubelet[2935]: I1212 17:29:04.690357 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:04.690545 kubelet[2935]: I1212 17:29:04.690519 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:04.690687 kubelet[2935]: I1212 17:29:04.690663 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:04.690839 kubelet[2935]: I1212 17:29:04.690816 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/384ad097fc073576937dce7f2902a49f-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-26\" (UID: \"384ad097fc073576937dce7f2902a49f\") " pod="kube-system/kube-apiserver-ip-172-31-24-26"
Dec 12 17:29:04.691024 kubelet[2935]: I1212 17:29:04.690907 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/384ad097fc073576937dce7f2902a49f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-26\" (UID: \"384ad097fc073576937dce7f2902a49f\") " pod="kube-system/kube-apiserver-ip-172-31-24-26"
Dec 12 17:29:04.691184 kubelet[2935]: I1212 17:29:04.691112 2935 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:04.691267 kubelet[2935]: E1212 17:29:04.691202 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-26?timeout=10s\": dial tcp 172.31.24.26:6443: connect: connection refused" interval="400ms"
Dec 12 17:29:04.694916 systemd[1]: Created slice kubepods-burstable-pod384ad097fc073576937dce7f2902a49f.slice - libcontainer container kubepods-burstable-pod384ad097fc073576937dce7f2902a49f.slice.
Dec 12 17:29:04.699211 kubelet[2935]: E1212 17:29:04.699172 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:04.712128 kubelet[2935]: I1212 17:29:04.712094 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-26"
Dec 12 17:29:04.713224 kubelet[2935]: E1212 17:29:04.713162 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.26:6443/api/v1/nodes\": dial tcp 172.31.24.26:6443: connect: connection refused" node="ip-172-31-24-26"
Dec 12 17:29:04.915614 kubelet[2935]: I1212 17:29:04.915580 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-26"
Dec 12 17:29:04.916374 kubelet[2935]: E1212 17:29:04.916309 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.26:6443/api/v1/nodes\": dial tcp 172.31.24.26:6443: connect: connection refused" node="ip-172-31-24-26"
Dec 12 17:29:04.979750 containerd[1901]: time="2025-12-12T17:29:04.978458008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-26,Uid:aa80954f089034eae32caf614c2110cc,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:04.995383 containerd[1901]: time="2025-12-12T17:29:04.995182564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-26,Uid:3c965744df70f81bfc03d84579a98483,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:05.003016 containerd[1901]: time="2025-12-12T17:29:05.002574288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-26,Uid:384ad097fc073576937dce7f2902a49f,Namespace:kube-system,Attempt:0,}"
Dec 12 17:29:05.092833 kubelet[2935]: E1212 17:29:05.092774 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-26?timeout=10s\": dial tcp 172.31.24.26:6443: connect: connection refused" interval="800ms"
Dec 12 17:29:05.320088 kubelet[2935]: I1212 17:29:05.319590 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-26"
Dec 12 17:29:05.320088 kubelet[2935]: E1212 17:29:05.320014 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.26:6443/api/v1/nodes\": dial tcp 172.31.24.26:6443: connect: connection refused" node="ip-172-31-24-26"
Dec 12 17:29:05.478283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528207318.mount: Deactivated successfully.
Dec 12 17:29:05.490609 kubelet[2935]: E1212 17:29:05.490543 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 17:29:05.492844 containerd[1901]: time="2025-12-12T17:29:05.492328191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:29:05.498832 containerd[1901]: time="2025-12-12T17:29:05.498782643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Dec 12 17:29:05.505681 containerd[1901]: time="2025-12-12T17:29:05.505606935Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:29:05.507306 containerd[1901]: time="2025-12-12T17:29:05.507228303Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:29:05.509345 containerd[1901]: time="2025-12-12T17:29:05.509276871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 17:29:05.514403 containerd[1901]: time="2025-12-12T17:29:05.514161087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:29:05.515782 containerd[1901]: time="2025-12-12T17:29:05.515711619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.875827ms"
Dec 12 17:29:05.517494 containerd[1901]: time="2025-12-12T17:29:05.517417167Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:29:05.518181 containerd[1901]: time="2025-12-12T17:29:05.518029395Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 17:29:05.524919 containerd[1901]: time="2025-12-12T17:29:05.524860959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 520.785603ms"
Dec 12 17:29:05.531292 containerd[1901]: time="2025-12-12T17:29:05.531211875Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 534.538083ms"
Dec 12 17:29:05.600499 containerd[1901]: time="2025-12-12T17:29:05.599814807Z" level=info msg="connecting to shim c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f" address="unix:///run/containerd/s/c7b679410c9947bbf179b637f21409936b303b2642d519db5bcb52d863771be2" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:05.606627 containerd[1901]: time="2025-12-12T17:29:05.606549123Z" level=info msg="connecting to shim 9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46" address="unix:///run/containerd/s/0cc4e05d3534dcb316cc77cfcd6d510aaa8ce38c4bbe2c9b53c555781b118205" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:05.610639 containerd[1901]: time="2025-12-12T17:29:05.610579695Z" level=info msg="connecting to shim e07ccce38dce1ad32467387682d4f7ca32fe3c8646456047ae88e41582e0cc89" address="unix:///run/containerd/s/a6983da63ee4e4604582ddfa0220a65a49f91449729de063292cbd0eb7730c8d" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:29:05.650277 kubelet[2935]: E1212 17:29:05.650212 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-26&limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 12 17:29:05.675681 systemd[1]: Started cri-containerd-c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f.scope - libcontainer container c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f.
Dec 12 17:29:05.689761 systemd[1]: Started cri-containerd-9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46.scope - libcontainer container 9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46.
Dec 12 17:29:05.705055 systemd[1]: Started cri-containerd-e07ccce38dce1ad32467387682d4f7ca32fe3c8646456047ae88e41582e0cc89.scope - libcontainer container e07ccce38dce1ad32467387682d4f7ca32fe3c8646456047ae88e41582e0cc89.
Dec 12 17:29:05.717740 kubelet[2935]: E1212 17:29:05.717469 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 17:29:05.732814 kubelet[2935]: E1212 17:29:05.732696 2935 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 17:29:05.847365 containerd[1901]: time="2025-12-12T17:29:05.846084856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-26,Uid:aa80954f089034eae32caf614c2110cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f\""
Dec 12 17:29:05.861956 containerd[1901]: time="2025-12-12T17:29:05.861194656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-26,Uid:384ad097fc073576937dce7f2902a49f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e07ccce38dce1ad32467387682d4f7ca32fe3c8646456047ae88e41582e0cc89\""
Dec 12 17:29:05.866416 containerd[1901]: time="2025-12-12T17:29:05.864806705Z" level=info msg="CreateContainer within sandbox \"c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 12 17:29:05.873355 containerd[1901]: time="2025-12-12T17:29:05.873145553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-26,Uid:3c965744df70f81bfc03d84579a98483,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46\""
Dec 12 17:29:05.880875 containerd[1901]: time="2025-12-12T17:29:05.880825277Z" level=info msg="CreateContainer within sandbox \"e07ccce38dce1ad32467387682d4f7ca32fe3c8646456047ae88e41582e0cc89\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 12 17:29:05.884610 containerd[1901]: time="2025-12-12T17:29:05.884483225Z" level=info msg="CreateContainer within sandbox \"9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 12 17:29:05.893893 kubelet[2935]: E1212 17:29:05.893833 2935 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-26?timeout=10s\": dial tcp 172.31.24.26:6443: connect: connection refused" interval="1.6s"
Dec 12 17:29:05.897852 containerd[1901]: time="2025-12-12T17:29:05.897011009Z" level=info msg="Container f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:05.903101 containerd[1901]: time="2025-12-12T17:29:05.903051533Z" level=info msg="Container a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:05.917282 containerd[1901]: time="2025-12-12T17:29:05.917230145Z" level=info msg="CreateContainer within sandbox \"c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4\""
Dec 12 17:29:05.918924 containerd[1901]: time="2025-12-12T17:29:05.918880073Z" level=info msg="StartContainer for \"f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4\""
Dec 12 17:29:05.922109 containerd[1901]: time="2025-12-12T17:29:05.922051505Z" level=info msg="connecting to shim f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4" address="unix:///run/containerd/s/c7b679410c9947bbf179b637f21409936b303b2642d519db5bcb52d863771be2" protocol=ttrpc version=3
Dec 12 17:29:05.928375 containerd[1901]: time="2025-12-12T17:29:05.928031633Z" level=info msg="Container b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:29:05.934143 containerd[1901]: time="2025-12-12T17:29:05.934070945Z" level=info msg="CreateContainer within sandbox \"e07ccce38dce1ad32467387682d4f7ca32fe3c8646456047ae88e41582e0cc89\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63\""
Dec 12 17:29:05.935598 containerd[1901]: time="2025-12-12T17:29:05.935465609Z" level=info msg="StartContainer for \"a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63\""
Dec 12 17:29:05.942610 containerd[1901]: time="2025-12-12T17:29:05.942407213Z" level=info msg="connecting to shim a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63" address="unix:///run/containerd/s/a6983da63ee4e4604582ddfa0220a65a49f91449729de063292cbd0eb7730c8d" protocol=ttrpc version=3
Dec 12 17:29:05.949249 containerd[1901]: time="2025-12-12T17:29:05.948844601Z" level=info msg="CreateContainer within sandbox \"9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865\""
Dec 12 17:29:05.951886 containerd[1901]: time="2025-12-12T17:29:05.951365669Z" level=info msg="StartContainer for \"b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865\""
Dec 12 17:29:05.956158 containerd[1901]: time="2025-12-12T17:29:05.956087273Z" level=info msg="connecting to shim b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865" address="unix:///run/containerd/s/0cc4e05d3534dcb316cc77cfcd6d510aaa8ce38c4bbe2c9b53c555781b118205" protocol=ttrpc version=3
Dec 12 17:29:05.965613 systemd[1]: Started cri-containerd-f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4.scope - libcontainer container f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4.
Dec 12 17:29:06.008690 systemd[1]: Started cri-containerd-a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63.scope - libcontainer container a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63.
Dec 12 17:29:06.028652 systemd[1]: Started cri-containerd-b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865.scope - libcontainer container b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865.
Dec 12 17:29:06.096876 containerd[1901]: time="2025-12-12T17:29:06.096250430Z" level=info msg="StartContainer for \"f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4\" returns successfully"
Dec 12 17:29:06.124478 kubelet[2935]: I1212 17:29:06.124079 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-26"
Dec 12 17:29:06.125227 kubelet[2935]: E1212 17:29:06.124573 2935 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.26:6443/api/v1/nodes\": dial tcp 172.31.24.26:6443: connect: connection refused" node="ip-172-31-24-26"
Dec 12 17:29:06.177238 containerd[1901]: time="2025-12-12T17:29:06.176975582Z" level=info msg="StartContainer for \"a3a7abd3fbbbe2db17e50045d8176b66f9fc3895a994c233ea75f03c7f422d63\" returns successfully"
Dec 12 17:29:06.220481 containerd[1901]: time="2025-12-12T17:29:06.220393694Z" level=info msg="StartContainer for \"b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865\" returns successfully"
Dec 12 17:29:06.557177 kubelet[2935]: E1212 17:29:06.557100 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:06.565699 kubelet[2935]: E1212 17:29:06.565650 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:06.574666 kubelet[2935]: E1212 17:29:06.574618 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:07.576003 kubelet[2935]: E1212 17:29:07.575944 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:07.578038 kubelet[2935]: E1212 17:29:07.577994 2935 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:07.728602 kubelet[2935]: I1212 17:29:07.728555 2935 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-26"
Dec 12 17:29:10.453258 kubelet[2935]: I1212 17:29:10.453196 2935 apiserver.go:52] "Watching apiserver"
Dec 12 17:29:10.588116 kubelet[2935]: I1212 17:29:10.588051 2935 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 12 17:29:10.723523 kubelet[2935]: E1212 17:29:10.723078 2935 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-26\" not found" node="ip-172-31-24-26"
Dec 12 17:29:10.759849 kubelet[2935]: I1212 17:29:10.759788 2935 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-26"
Dec 12 17:29:10.785797 kubelet[2935]: I1212 17:29:10.785744 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:10.804198 kubelet[2935]: E1212 17:29:10.804049 2935 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-26.188087f5f9c82d4e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-26,UID:ip-172-31-24-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-26,},FirstTimestamp:2025-12-12 17:29:04.456617294 +0000 UTC m=+2.919405628,LastTimestamp:2025-12-12 17:29:04.456617294 +0000 UTC m=+2.919405628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-26,}"
Dec 12 17:29:10.957608 kubelet[2935]: E1212 17:29:10.957550 2935 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:10.957608 kubelet[2935]: I1212 17:29:10.957600 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-26"
Dec 12 17:29:10.977818 kubelet[2935]: E1212 17:29:10.977216 2935 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-26"
Dec 12 17:29:10.977818 kubelet[2935]: I1212 17:29:10.977267 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-26"
Dec 12 17:29:10.990346 kubelet[2935]: E1212 17:29:10.990264 2935 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-26\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-26"
Dec 12 17:29:11.534358 kubelet[2935]: I1212 17:29:11.532004 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-26"
Dec 12 17:29:13.361874 kubelet[2935]: I1212 17:29:13.361826 2935 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-26"
Dec 12 17:29:14.608752 kubelet[2935]: I1212 17:29:14.608617 2935 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-26" podStartSLOduration=1.608591268 podStartE2EDuration="1.608591268s" podCreationTimestamp="2025-12-12 17:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:14.58625988 +0000 UTC m=+13.049048202" watchObservedRunningTime="2025-12-12 17:29:14.608591268 +0000 UTC m=+13.071379566"
Dec 12 17:29:14.849090 systemd[1]: Reload requested from client PID 3225 ('systemctl') (unit session-7.scope)...
Dec 12 17:29:14.849130 systemd[1]: Reloading...
Dec 12 17:29:15.115358 zram_generator::config[3272]: No configuration found.
Dec 12 17:29:15.193634 update_engine[1875]: I20251212 17:29:15.193450 1875 update_attempter.cc:509] Updating boot flags...
Dec 12 17:29:15.896753 systemd[1]: Reloading finished in 1046 ms.
Dec 12 17:29:16.121559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:29:16.155464 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 17:29:16.156174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:29:16.156265 systemd[1]: kubelet.service: Consumed 3.824s CPU time, 121.7M memory peak.
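
The podStartE2EDuration in the tracker entry above is simple arithmetic: the watch-observed running time (17:29:14.608591268) minus podCreationTimestamp (17:29:13), with the m=+13.07... offsets dating both relative to kubelet start. Reproduced:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values from the pod_startup_latency_tracker entry above.
        created, _ := time.Parse(time.RFC3339, "2025-12-12T17:29:13Z")
        observed, _ := time.Parse(time.RFC3339Nano, "2025-12-12T17:29:14.608591268Z")
        fmt.Println(observed.Sub(created)) // 1.608591268s == podStartE2EDuration
    }
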
Dec 12 17:29:16.161841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:29:16.808551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:29:16.826930 (kubelet)[3599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:29:16.982661 kubelet[3599]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:29:16.983362 kubelet[3599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:29:16.983362 kubelet[3599]: I1212 17:29:16.983227 3599 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:29:17.014259 kubelet[3599]: I1212 17:29:17.014192 3599 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:29:17.014259 kubelet[3599]: I1212 17:29:17.014243 3599 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:29:17.014511 kubelet[3599]: I1212 17:29:17.014302 3599 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:29:17.014511 kubelet[3599]: I1212 17:29:17.014348 3599 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:29:17.017887 kubelet[3599]: I1212 17:29:17.015575 3599 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:29:17.021609 kubelet[3599]: I1212 17:29:17.021545 3599 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:29:17.028240 kubelet[3599]: I1212 17:29:17.028197 3599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:29:17.039297 kubelet[3599]: I1212 17:29:17.039215 3599 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:29:17.050234 kubelet[3599]: I1212 17:29:17.050185 3599 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:29:17.050806 kubelet[3599]: I1212 17:29:17.050748 3599 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:29:17.051093 kubelet[3599]: I1212 17:29:17.050803 3599 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:29:17.051093 kubelet[3599]: I1212 17:29:17.051085 3599 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:29:17.051299 kubelet[3599]: I1212 17:29:17.051106 3599 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:29:17.051299 kubelet[3599]: I1212 17:29:17.051153 3599 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:29:17.057395 kubelet[3599]: I1212 17:29:17.056283 3599 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:29:17.057395 kubelet[3599]: I1212 17:29:17.056766 3599 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:29:17.057855 kubelet[3599]: I1212 17:29:17.057804 3599 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:29:17.057932 kubelet[3599]: I1212 17:29:17.057907 3599 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:29:17.057981 kubelet[3599]: I1212 17:29:17.057950 3599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:29:17.068114 kubelet[3599]: I1212 17:29:17.067966 3599 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:29:17.071373 kubelet[3599]: I1212 17:29:17.070198 3599 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:29:17.071373 kubelet[3599]: I1212 17:29:17.070269 3599 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:29:17.087040 
kubelet[3599]: I1212 17:29:17.086903 3599 server.go:1262] "Started kubelet" Dec 12 17:29:17.098056 kubelet[3599]: I1212 17:29:17.097998 3599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:29:17.112179 kubelet[3599]: I1212 17:29:17.112086 3599 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:29:17.118722 kubelet[3599]: I1212 17:29:17.118620 3599 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:29:17.145944 kubelet[3599]: I1212 17:29:17.145834 3599 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:29:17.146094 kubelet[3599]: I1212 17:29:17.145965 3599 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:29:17.147773 kubelet[3599]: I1212 17:29:17.146278 3599 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:29:17.151380 kubelet[3599]: I1212 17:29:17.150766 3599 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:29:17.175913 kubelet[3599]: I1212 17:29:17.175862 3599 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:29:17.176599 kubelet[3599]: E1212 17:29:17.176545 3599 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-26\" not found" Dec 12 17:29:17.185907 kubelet[3599]: I1212 17:29:17.178400 3599 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:29:17.185907 kubelet[3599]: I1212 17:29:17.178614 3599 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:29:17.194820 kubelet[3599]: I1212 17:29:17.194742 3599 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 17:29:17.205388 kubelet[3599]: I1212 17:29:17.204906 3599 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:29:17.226878 kubelet[3599]: E1212 17:29:17.225310 3599 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:29:17.231560 kubelet[3599]: I1212 17:29:17.231507 3599 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:29:17.231560 kubelet[3599]: I1212 17:29:17.231550 3599 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:29:17.245398 kubelet[3599]: I1212 17:29:17.245221 3599 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:29:17.245398 kubelet[3599]: I1212 17:29:17.245264 3599 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:29:17.245982 kubelet[3599]: I1212 17:29:17.245306 3599 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:29:17.246097 kubelet[3599]: E1212 17:29:17.246041 3599 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:29:17.348420 kubelet[3599]: E1212 17:29:17.348063 3599 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:29:17.359673 kubelet[3599]: I1212 17:29:17.359639 3599 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.360526 3599 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361363 3599 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361607 3599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361629 3599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361659 3599 policy_none.go:49] "None policy: Start" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361679 3599 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361699 3599 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361879 3599 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 17:29:17.362840 kubelet[3599]: I1212 17:29:17.361896 3599 policy_none.go:47] "Start" Dec 12 17:29:17.387745 kubelet[3599]: E1212 17:29:17.387692 3599 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:29:17.388035 kubelet[3599]: I1212 17:29:17.387988 3599 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:29:17.388105 kubelet[3599]: I1212 17:29:17.388023 3599 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:29:17.388986 kubelet[3599]: I1212 17:29:17.388956 3599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:29:17.392527 kubelet[3599]: E1212 17:29:17.392489 3599 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:29:17.516676 kubelet[3599]: I1212 17:29:17.515402 3599 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-26" Dec 12 17:29:17.536383 kubelet[3599]: I1212 17:29:17.535386 3599 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-26" Dec 12 17:29:17.536654 kubelet[3599]: I1212 17:29:17.536628 3599 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-26" Dec 12 17:29:17.551365 kubelet[3599]: I1212 17:29:17.551292 3599 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:17.553936 kubelet[3599]: I1212 17:29:17.552301 3599 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:17.554473 kubelet[3599]: I1212 17:29:17.554443 3599 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-26" Dec 12 17:29:17.574230 kubelet[3599]: E1212 17:29:17.574187 3599 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-26\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:17.578295 kubelet[3599]: E1212 17:29:17.578232 3599 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-26\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:17.589379 kubelet[3599]: I1212 17:29:17.589068 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:17.589379 kubelet[3599]: I1212 17:29:17.589135 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:17.589379 kubelet[3599]: I1212 17:29:17.589177 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/384ad097fc073576937dce7f2902a49f-ca-certs\") pod \"kube-apiserver-ip-172-31-24-26\" (UID: \"384ad097fc073576937dce7f2902a49f\") " pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:17.589379 kubelet[3599]: I1212 17:29:17.589213 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:17.589379 kubelet[3599]: I1212 17:29:17.589256 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 
17:29:17.591049 kubelet[3599]: I1212 17:29:17.589311 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa80954f089034eae32caf614c2110cc-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-26\" (UID: \"aa80954f089034eae32caf614c2110cc\") " pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:17.591049 kubelet[3599]: I1212 17:29:17.589659 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3c965744df70f81bfc03d84579a98483-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-26\" (UID: \"3c965744df70f81bfc03d84579a98483\") " pod="kube-system/kube-scheduler-ip-172-31-24-26" Dec 12 17:29:17.591049 kubelet[3599]: I1212 17:29:17.589727 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/384ad097fc073576937dce7f2902a49f-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-26\" (UID: \"384ad097fc073576937dce7f2902a49f\") " pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:17.591049 kubelet[3599]: I1212 17:29:17.589814 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/384ad097fc073576937dce7f2902a49f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-26\" (UID: \"384ad097fc073576937dce7f2902a49f\") " pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:17.996600 kubelet[3599]: I1212 17:29:17.996529 3599 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:29:17.998147 containerd[1901]: time="2025-12-12T17:29:17.998049197Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
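The "Updating runtime config through cri with podcidr" entry above is the kubelet handing the node's pod CIDR (192.168.0.0/24) to containerd; every pod sandbox on this node will be addressed out of that /24. A standard-library sketch of what the range covers, for orientation only (illustrative, not kubelet code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Pod CIDR pushed to the runtime in the log entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("network=%s addresses=%d\n", ipnet, 1<<(bits-ones))
	// network=192.168.0.0/24 addresses=256
}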
Dec 12 17:29:17.999723 kubelet[3599]: I1212 17:29:17.999679 3599 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:29:18.062961 kubelet[3599]: I1212 17:29:18.062900 3599 apiserver.go:52] "Watching apiserver" Dec 12 17:29:18.086831 kubelet[3599]: I1212 17:29:18.086774 3599 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:29:18.333543 kubelet[3599]: I1212 17:29:18.333483 3599 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:18.334494 kubelet[3599]: I1212 17:29:18.334443 3599 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:18.349920 kubelet[3599]: E1212 17:29:18.349855 3599 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-26\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-26" Dec 12 17:29:18.350377 kubelet[3599]: E1212 17:29:18.350305 3599 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-26\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-26" Dec 12 17:29:18.419477 kubelet[3599]: I1212 17:29:18.419348 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-26" podStartSLOduration=1.4192972830000001 podStartE2EDuration="1.419297283s" podCreationTimestamp="2025-12-12 17:29:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:18.418841163 +0000 UTC m=+1.578847593" watchObservedRunningTime="2025-12-12 17:29:18.419297283 +0000 UTC m=+1.579303761" Dec 12 17:29:18.899625 kubelet[3599]: I1212 17:29:18.899549 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e8ac5c2-d79f-4d33-8801-eda9cfb5701f-xtables-lock\") pod \"kube-proxy-bv5tk\" (UID: \"8e8ac5c2-d79f-4d33-8801-eda9cfb5701f\") " pod="kube-system/kube-proxy-bv5tk" Dec 12 17:29:18.899625 kubelet[3599]: I1212 17:29:18.899620 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e8ac5c2-d79f-4d33-8801-eda9cfb5701f-lib-modules\") pod \"kube-proxy-bv5tk\" (UID: \"8e8ac5c2-d79f-4d33-8801-eda9cfb5701f\") " pod="kube-system/kube-proxy-bv5tk" Dec 12 17:29:18.899886 kubelet[3599]: I1212 17:29:18.899668 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e8ac5c2-d79f-4d33-8801-eda9cfb5701f-kube-proxy\") pod \"kube-proxy-bv5tk\" (UID: \"8e8ac5c2-d79f-4d33-8801-eda9cfb5701f\") " pod="kube-system/kube-proxy-bv5tk" Dec 12 17:29:18.899886 kubelet[3599]: I1212 17:29:18.899711 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrkp\" (UniqueName: \"kubernetes.io/projected/8e8ac5c2-d79f-4d33-8801-eda9cfb5701f-kube-api-access-frrkp\") pod \"kube-proxy-bv5tk\" (UID: \"8e8ac5c2-d79f-4d33-8801-eda9cfb5701f\") " pod="kube-system/kube-proxy-bv5tk" Dec 12 17:29:18.909502 kubelet[3599]: E1212 17:29:18.909383 3599 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-bv5tk\" is forbidden: User \"system:node:ip-172-31-24-26\" cannot get resource 
\"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-26' and this object" podUID="8e8ac5c2-d79f-4d33-8801-eda9cfb5701f" pod="kube-system/kube-proxy-bv5tk" Dec 12 17:29:18.909856 kubelet[3599]: E1212 17:29:18.909779 3599 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-24-26\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-26' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Dec 12 17:29:18.910003 kubelet[3599]: E1212 17:29:18.909936 3599 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-24-26\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-26' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Dec 12 17:29:18.916589 systemd[1]: Created slice kubepods-besteffort-pod8e8ac5c2_d79f_4d33_8801_eda9cfb5701f.slice - libcontainer container kubepods-besteffort-pod8e8ac5c2_d79f_4d33_8801_eda9cfb5701f.slice. Dec 12 17:29:19.180214 systemd[1]: Created slice kubepods-besteffort-pode5ecd13c_d5f8_48d6_a8bf_2462b955ef30.slice - libcontainer container kubepods-besteffort-pode5ecd13c_d5f8_48d6_a8bf_2462b955ef30.slice. Dec 12 17:29:19.202089 kubelet[3599]: I1212 17:29:19.201742 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2ss\" (UniqueName: \"kubernetes.io/projected/e5ecd13c-d5f8-48d6-a8bf-2462b955ef30-kube-api-access-qm2ss\") pod \"tigera-operator-65cdcdfd6d-qsgtb\" (UID: \"e5ecd13c-d5f8-48d6-a8bf-2462b955ef30\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-qsgtb" Dec 12 17:29:19.203987 kubelet[3599]: I1212 17:29:19.203391 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5ecd13c-d5f8-48d6-a8bf-2462b955ef30-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-qsgtb\" (UID: \"e5ecd13c-d5f8-48d6-a8bf-2462b955ef30\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-qsgtb" Dec 12 17:29:19.497458 containerd[1901]: time="2025-12-12T17:29:19.497106904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-qsgtb,Uid:e5ecd13c-d5f8-48d6-a8bf-2462b955ef30,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:29:19.544364 containerd[1901]: time="2025-12-12T17:29:19.544254664Z" level=info msg="connecting to shim a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c" address="unix:///run/containerd/s/c7774c9d8b0ec53cc1153dc325e12f66df23b0dd591d624cbc84f3740613d359" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:19.597731 systemd[1]: Started cri-containerd-a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c.scope - libcontainer container a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c. 
Dec 12 17:29:19.682291 containerd[1901]: time="2025-12-12T17:29:19.682187645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-qsgtb,Uid:e5ecd13c-d5f8-48d6-a8bf-2462b955ef30,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c\"" Dec 12 17:29:19.686870 containerd[1901]: time="2025-12-12T17:29:19.686806505Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:29:20.002771 kubelet[3599]: E1212 17:29:20.001297 3599 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 12 17:29:20.002771 kubelet[3599]: E1212 17:29:20.001469 3599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e8ac5c2-d79f-4d33-8801-eda9cfb5701f-kube-proxy podName:8e8ac5c2-d79f-4d33-8801-eda9cfb5701f nodeName:}" failed. No retries permitted until 2025-12-12 17:29:20.501428531 +0000 UTC m=+3.661434937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/8e8ac5c2-d79f-4d33-8801-eda9cfb5701f-kube-proxy") pod "kube-proxy-bv5tk" (UID: "8e8ac5c2-d79f-4d33-8801-eda9cfb5701f") : failed to sync configmap cache: timed out waiting for the condition Dec 12 17:29:20.733004 containerd[1901]: time="2025-12-12T17:29:20.732861666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bv5tk,Uid:8e8ac5c2-d79f-4d33-8801-eda9cfb5701f,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:20.778757 containerd[1901]: time="2025-12-12T17:29:20.778674067Z" level=info msg="connecting to shim d6d2752d24f1ba6fd9ea8e6ff1eb4b3993c4cc8522e437d6ab6928cdb7064e3e" address="unix:///run/containerd/s/72896a699decb51fb173ec8e60c992581f83311f9dd857381b7d739489a22c30" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:20.838715 systemd[1]: Started cri-containerd-d6d2752d24f1ba6fd9ea8e6ff1eb4b3993c4cc8522e437d6ab6928cdb7064e3e.scope - libcontainer container d6d2752d24f1ba6fd9ea8e6ff1eb4b3993c4cc8522e437d6ab6928cdb7064e3e. 
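The nestedpendingoperations entry above reschedules the failed kube-proxy ConfigMap mount for 500ms later ("durationBeforeRetry 500ms"); the kubelet backs off exponentially on repeated mount failures. A sketch of that bookkeeping, where the doubling factor and the cap are assumptions rather than kubelet's exact constants:

package main

import (
	"fmt"
	"time"
)

// nextRetry mirrors the "No retries permitted until <now+delay>" logic:
// start at 500ms and double per consecutive failure, up to an assumed cap.
func nextRetry(failures int) time.Duration {
	const maxDelay = 2 * time.Minute // assumption, not kubelet's constant
	d := 500 * time.Millisecond
	for i := 0; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 0; f <= 4; f++ {
		fmt.Printf("failure %d -> wait %s\n", f, nextRetry(f))
	}
	// failure 0 -> wait 500ms, matching durationBeforeRetry in the log
}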
Dec 12 17:29:20.923835 containerd[1901]: time="2025-12-12T17:29:20.923769547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bv5tk,Uid:8e8ac5c2-d79f-4d33-8801-eda9cfb5701f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d2752d24f1ba6fd9ea8e6ff1eb4b3993c4cc8522e437d6ab6928cdb7064e3e\"" Dec 12 17:29:20.934983 containerd[1901]: time="2025-12-12T17:29:20.934933675Z" level=info msg="CreateContainer within sandbox \"d6d2752d24f1ba6fd9ea8e6ff1eb4b3993c4cc8522e437d6ab6928cdb7064e3e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:29:20.959425 containerd[1901]: time="2025-12-12T17:29:20.957429799Z" level=info msg="Container 195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:20.978041 containerd[1901]: time="2025-12-12T17:29:20.977956280Z" level=info msg="CreateContainer within sandbox \"d6d2752d24f1ba6fd9ea8e6ff1eb4b3993c4cc8522e437d6ab6928cdb7064e3e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f\"" Dec 12 17:29:20.980098 containerd[1901]: time="2025-12-12T17:29:20.980000648Z" level=info msg="StartContainer for \"195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f\"" Dec 12 17:29:20.986657 containerd[1901]: time="2025-12-12T17:29:20.986393336Z" level=info msg="connecting to shim 195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f" address="unix:///run/containerd/s/72896a699decb51fb173ec8e60c992581f83311f9dd857381b7d739489a22c30" protocol=ttrpc version=3 Dec 12 17:29:21.030881 systemd[1]: Started cri-containerd-195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f.scope - libcontainer container 195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f. 
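Each "connecting to shim ... protocol=ttrpc" line above is containerd attaching to a per-task shim over the shim's own socket; the kubelet-facing CRI API lives on containerd's main socket instead. A minimal probe of that endpoint with the published cri-api client, equivalent in spirit to "crictl version" (the socket path is the common default and an assumption for this host):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI endpoint; adjust if the host config differs.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	v, err := rt.Version(context.Background(), &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(v.RuntimeName, v.RuntimeVersion)
}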
Dec 12 17:29:21.198484 containerd[1901]: time="2025-12-12T17:29:21.198264725Z" level=info msg="StartContainer for \"195de46d568aa871bad16eba573c748bd5ec974a60bb25dba3b30d2553ec656f\" returns successfully" Dec 12 17:29:22.488289 kubelet[3599]: I1212 17:29:22.488129 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bv5tk" podStartSLOduration=4.488108911 podStartE2EDuration="4.488108911s" podCreationTimestamp="2025-12-12 17:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:29:21.40625787 +0000 UTC m=+4.566264312" watchObservedRunningTime="2025-12-12 17:29:22.488108911 +0000 UTC m=+5.648115329" Dec 12 17:29:22.847720 containerd[1901]: time="2025-12-12T17:29:22.847607481Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:22.849940 containerd[1901]: time="2025-12-12T17:29:22.849756297Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 12 17:29:22.852990 containerd[1901]: time="2025-12-12T17:29:22.852525093Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:22.868387 containerd[1901]: time="2025-12-12T17:29:22.868278873Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:22.870519 containerd[1901]: time="2025-12-12T17:29:22.870379185Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.183505312s" Dec 12 17:29:22.870869 containerd[1901]: time="2025-12-12T17:29:22.870705753Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:29:22.881587 containerd[1901]: time="2025-12-12T17:29:22.881273565Z" level=info msg="CreateContainer within sandbox \"a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:29:22.901288 containerd[1901]: time="2025-12-12T17:29:22.899714589Z" level=info msg="Container abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:22.914428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4240647921.mount: Deactivated successfully. 
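In the pod_startup_latency_tracker entry above, podStartSLOduration for kube-proxy-bv5tk is observedRunningTime minus podCreationTimestamp; because nothing was pulled (both pull timestamps are the zero value 0001-01-01), the SLO and E2E durations coincide at 4.488108911s. Reproducing the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-12-12 17:29:18 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-12-12 17:29:22.488108911 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 4.488108911s, as logged
}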
Dec 12 17:29:22.922953 containerd[1901]: time="2025-12-12T17:29:22.922857069Z" level=info msg="CreateContainer within sandbox \"a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\"" Dec 12 17:29:22.924467 containerd[1901]: time="2025-12-12T17:29:22.924383745Z" level=info msg="StartContainer for \"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\"" Dec 12 17:29:22.926782 containerd[1901]: time="2025-12-12T17:29:22.926655381Z" level=info msg="connecting to shim abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c" address="unix:///run/containerd/s/c7774c9d8b0ec53cc1153dc325e12f66df23b0dd591d624cbc84f3740613d359" protocol=ttrpc version=3 Dec 12 17:29:22.973644 systemd[1]: Started cri-containerd-abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c.scope - libcontainer container abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c. Dec 12 17:29:23.041407 containerd[1901]: time="2025-12-12T17:29:23.041261814Z" level=info msg="StartContainer for \"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\" returns successfully" Dec 12 17:29:24.800366 kubelet[3599]: I1212 17:29:24.800194 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-qsgtb" podStartSLOduration=2.613755979 podStartE2EDuration="5.800171387s" podCreationTimestamp="2025-12-12 17:29:19 +0000 UTC" firstStartedPulling="2025-12-12 17:29:19.686129225 +0000 UTC m=+2.846135679" lastFinishedPulling="2025-12-12 17:29:22.872544681 +0000 UTC m=+6.032551087" observedRunningTime="2025-12-12 17:29:23.393118232 +0000 UTC m=+6.553124686" watchObservedRunningTime="2025-12-12 17:29:24.800171387 +0000 UTC m=+7.960177817" Dec 12 17:29:32.002085 sudo[2338]: pam_unix(sudo:session): session closed for user root Dec 12 17:29:32.026111 sshd[2337]: Connection closed by 147.75.109.163 port 54470 Dec 12 17:29:32.027155 sshd-session[2334]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:32.038949 systemd[1]: sshd@6-172.31.24.26:22-147.75.109.163:54470.service: Deactivated successfully. Dec 12 17:29:32.047177 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:29:32.048237 systemd[1]: session-7.scope: Consumed 11.759s CPU time, 225.3M memory peak. Dec 12 17:29:32.052814 systemd-logind[1874]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:29:32.059557 systemd-logind[1874]: Removed session 7. Dec 12 17:29:48.770953 systemd[1]: Created slice kubepods-besteffort-pod271675fb_b761_4f89_a7bd_27ce30c7f80a.slice - libcontainer container kubepods-besteffort-pod271675fb_b761_4f89_a7bd_27ce30c7f80a.slice. 
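The kubepods-*.slice unit names above encode the pod's QoS class and UID: systemd treats "-" as a path separator inside unit names, so the UID's dashes are escaped to underscores. Checking the scheme against the tigera-operator pod's UID from this log (sliceName is a local helper, not a kubelet function):

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the kubepods cgroup slice naming visible in the
// journal: QoS class plus the pod UID with "-" escaped to "_".
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "e5ecd13c-d5f8-48d6-a8bf-2462b955ef30"))
	// kubepods-besteffort-pode5ecd13c_d5f8_48d6_a8bf_2462b955ef30.slice
}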
Dec 12 17:29:48.827871 kubelet[3599]: I1212 17:29:48.827645 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/271675fb-b761-4f89-a7bd-27ce30c7f80a-tigera-ca-bundle\") pod \"calico-typha-7586b756b8-7qkn6\" (UID: \"271675fb-b761-4f89-a7bd-27ce30c7f80a\") " pod="calico-system/calico-typha-7586b756b8-7qkn6" Dec 12 17:29:48.829489 kubelet[3599]: I1212 17:29:48.829053 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/271675fb-b761-4f89-a7bd-27ce30c7f80a-typha-certs\") pod \"calico-typha-7586b756b8-7qkn6\" (UID: \"271675fb-b761-4f89-a7bd-27ce30c7f80a\") " pod="calico-system/calico-typha-7586b756b8-7qkn6" Dec 12 17:29:48.829489 kubelet[3599]: I1212 17:29:48.829143 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvxh\" (UniqueName: \"kubernetes.io/projected/271675fb-b761-4f89-a7bd-27ce30c7f80a-kube-api-access-rbvxh\") pod \"calico-typha-7586b756b8-7qkn6\" (UID: \"271675fb-b761-4f89-a7bd-27ce30c7f80a\") " pod="calico-system/calico-typha-7586b756b8-7qkn6" Dec 12 17:29:48.974297 systemd[1]: Created slice kubepods-besteffort-pode5b843a2_3da0_42e0_9eb8_48bbfe64cd4d.slice - libcontainer container kubepods-besteffort-pode5b843a2_3da0_42e0_9eb8_48bbfe64cd4d.slice. Dec 12 17:29:49.030357 kubelet[3599]: I1212 17:29:49.030141 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fbz\" (UniqueName: \"kubernetes.io/projected/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-kube-api-access-v9fbz\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.030357 kubelet[3599]: I1212 17:29:49.030235 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-var-run-calico\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.030357 kubelet[3599]: I1212 17:29:49.030282 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-cni-bin-dir\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.031895 kubelet[3599]: I1212 17:29:49.031451 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-lib-modules\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.031895 kubelet[3599]: I1212 17:29:49.031636 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-var-lib-calico\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.031895 kubelet[3599]: I1212 17:29:49.031687 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-flexvol-driver-host\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.031895 kubelet[3599]: I1212 17:29:49.031733 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-policysync\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.031895 kubelet[3599]: I1212 17:29:49.031785 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-cni-net-dir\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.032300 kubelet[3599]: I1212 17:29:49.031866 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-node-certs\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.032300 kubelet[3599]: I1212 17:29:49.031913 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-tigera-ca-bundle\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.032300 kubelet[3599]: I1212 17:29:49.031963 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-cni-log-dir\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.032300 kubelet[3599]: I1212 17:29:49.032016 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d-xtables-lock\") pod \"calico-node-sjkq7\" (UID: \"e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d\") " pod="calico-system/calico-node-sjkq7" Dec 12 17:29:49.088067 containerd[1901]: time="2025-12-12T17:29:49.087846571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7586b756b8-7qkn6,Uid:271675fb-b761-4f89-a7bd-27ce30c7f80a,Namespace:calico-system,Attempt:0,}" Dec 12 17:29:49.091914 kubelet[3599]: E1212 17:29:49.090647 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:29:49.133632 kubelet[3599]: I1212 17:29:49.133559 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dfaeee63-32d9-4902-9d2a-576429123236-socket-dir\") pod \"csi-node-driver-ljkxc\" (UID: \"dfaeee63-32d9-4902-9d2a-576429123236\") " pod="calico-system/csi-node-driver-ljkxc" Dec 12 
17:29:49.133809 kubelet[3599]: I1212 17:29:49.133687 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dfaeee63-32d9-4902-9d2a-576429123236-registration-dir\") pod \"csi-node-driver-ljkxc\" (UID: \"dfaeee63-32d9-4902-9d2a-576429123236\") " pod="calico-system/csi-node-driver-ljkxc" Dec 12 17:29:49.133809 kubelet[3599]: I1212 17:29:49.133768 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dfaeee63-32d9-4902-9d2a-576429123236-varrun\") pod \"csi-node-driver-ljkxc\" (UID: \"dfaeee63-32d9-4902-9d2a-576429123236\") " pod="calico-system/csi-node-driver-ljkxc" Dec 12 17:29:49.133954 kubelet[3599]: I1212 17:29:49.133867 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfaeee63-32d9-4902-9d2a-576429123236-kubelet-dir\") pod \"csi-node-driver-ljkxc\" (UID: \"dfaeee63-32d9-4902-9d2a-576429123236\") " pod="calico-system/csi-node-driver-ljkxc" Dec 12 17:29:49.133954 kubelet[3599]: I1212 17:29:49.133936 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xcnm\" (UniqueName: \"kubernetes.io/projected/dfaeee63-32d9-4902-9d2a-576429123236-kube-api-access-5xcnm\") pod \"csi-node-driver-ljkxc\" (UID: \"dfaeee63-32d9-4902-9d2a-576429123236\") " pod="calico-system/csi-node-driver-ljkxc" Dec 12 17:29:49.182562 kubelet[3599]: E1212 17:29:49.182162 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.182562 kubelet[3599]: W1212 17:29:49.182219 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.182562 kubelet[3599]: E1212 17:29:49.182257 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.184673 kubelet[3599]: E1212 17:29:49.184533 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.184673 kubelet[3599]: W1212 17:29:49.184595 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.184673 kubelet[3599]: E1212 17:29:49.184631 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:29:49.184927 containerd[1901]: time="2025-12-12T17:29:49.184784588Z" level=info msg="connecting to shim af123ca2a30ea007d22cce47235ada56fe4adeb867cbd01b6b1316e15f0170b7" address="unix:///run/containerd/s/11ba1f74c50653a67fffa61d6bb10909d01a396cc02c0acfa8c26661e86f1d7a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:49.239238 kubelet[3599]: E1212 17:29:49.239046 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.240612 kubelet[3599]: W1212 17:29:49.240111 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.240612 kubelet[3599]: E1212 17:29:49.240172 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.240999 kubelet[3599]: E1212 17:29:49.240969 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.241126 kubelet[3599]: W1212 17:29:49.241098 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.241255 kubelet[3599]: E1212 17:29:49.241230 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.244734 kubelet[3599]: E1212 17:29:49.244387 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.245094 kubelet[3599]: W1212 17:29:49.244925 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.245094 kubelet[3599]: E1212 17:29:49.244978 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.245650 kubelet[3599]: E1212 17:29:49.245617 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.246016 kubelet[3599]: W1212 17:29:49.245782 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.246016 kubelet[3599]: E1212 17:29:49.245824 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:29:49.247297 kubelet[3599]: E1212 17:29:49.246445 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.247297 kubelet[3599]: W1212 17:29:49.247503 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.248142 kubelet[3599]: E1212 17:29:49.247976 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.250713 kubelet[3599]: E1212 17:29:49.250382 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.250713 kubelet[3599]: W1212 17:29:49.250648 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.251211 kubelet[3599]: E1212 17:29:49.251008 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.253112 kubelet[3599]: E1212 17:29:49.253045 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.253573 kubelet[3599]: W1212 17:29:49.253200 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.253573 kubelet[3599]: E1212 17:29:49.253245 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.254510 kubelet[3599]: E1212 17:29:49.254478 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.254960 kubelet[3599]: W1212 17:29:49.254631 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.254960 kubelet[3599]: E1212 17:29:49.254667 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.255869 kubelet[3599]: E1212 17:29:49.255827 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.256475 kubelet[3599]: W1212 17:29:49.256151 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.256935 kubelet[3599]: E1212 17:29:49.256432 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:29:49.258292 kubelet[3599]: E1212 17:29:49.258252 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.258938 kubelet[3599]: W1212 17:29:49.258492 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.259360 kubelet[3599]: E1212 17:29:49.258571 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.260694 kubelet[3599]: E1212 17:29:49.260575 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.261483 kubelet[3599]: W1212 17:29:49.261354 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.261868 kubelet[3599]: E1212 17:29:49.261668 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.262898 kubelet[3599]: E1212 17:29:49.262769 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.263450 kubelet[3599]: W1212 17:29:49.263171 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.263450 kubelet[3599]: E1212 17:29:49.263227 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.264529 kubelet[3599]: E1212 17:29:49.264494 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.265301 kubelet[3599]: W1212 17:29:49.264727 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.265301 kubelet[3599]: E1212 17:29:49.265055 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.266139 kubelet[3599]: E1212 17:29:49.266101 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.266627 kubelet[3599]: W1212 17:29:49.266581 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.266627 kubelet[3599]: E1212 17:29:49.266705 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:29:49.267595 kubelet[3599]: E1212 17:29:49.267562 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.267774 kubelet[3599]: W1212 17:29:49.267742 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.267902 kubelet[3599]: E1212 17:29:49.267876 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.270730 kubelet[3599]: E1212 17:29:49.270685 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.272390 kubelet[3599]: W1212 17:29:49.271404 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.272390 kubelet[3599]: E1212 17:29:49.271462 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.276087 kubelet[3599]: E1212 17:29:49.276034 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.276605 kubelet[3599]: W1212 17:29:49.276291 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.276605 kubelet[3599]: E1212 17:29:49.276400 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.277102 kubelet[3599]: E1212 17:29:49.277069 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.277387 kubelet[3599]: W1212 17:29:49.277309 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.277648 kubelet[3599]: E1212 17:29:49.277472 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.279975 systemd[1]: Started cri-containerd-af123ca2a30ea007d22cce47235ada56fe4adeb867cbd01b6b1316e15f0170b7.scope - libcontainer container af123ca2a30ea007d22cce47235ada56fe4adeb867cbd01b6b1316e15f0170b7. 
Dec 12 17:29:49.287961 kubelet[3599]: E1212 17:29:49.287876 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.289508 kubelet[3599]: W1212 17:29:49.289454 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.289615 kubelet[3599]: E1212 17:29:49.289531 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.291368 kubelet[3599]: E1212 17:29:49.290922 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.291368 kubelet[3599]: W1212 17:29:49.290956 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.291368 kubelet[3599]: E1212 17:29:49.290987 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.293180 kubelet[3599]: E1212 17:29:49.293038 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.293810 kubelet[3599]: W1212 17:29:49.293529 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.293810 kubelet[3599]: E1212 17:29:49.293571 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.297357 kubelet[3599]: E1212 17:29:49.295985 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.297357 kubelet[3599]: W1212 17:29:49.296075 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.297357 kubelet[3599]: E1212 17:29:49.296111 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.298222 kubelet[3599]: E1212 17:29:49.298166 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.298516 kubelet[3599]: W1212 17:29:49.298345 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.298723 kubelet[3599]: E1212 17:29:49.298599 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:29:49.300727 kubelet[3599]: E1212 17:29:49.300688 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.301714 containerd[1901]: time="2025-12-12T17:29:49.301247780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sjkq7,Uid:e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d,Namespace:calico-system,Attempt:0,}" Dec 12 17:29:49.302240 kubelet[3599]: W1212 17:29:49.301952 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.302240 kubelet[3599]: E1212 17:29:49.301999 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.305460 kubelet[3599]: E1212 17:29:49.304831 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.305460 kubelet[3599]: W1212 17:29:49.304867 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.305460 kubelet[3599]: E1212 17:29:49.304901 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.306165 kubelet[3599]: E1212 17:29:49.306117 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.306388 kubelet[3599]: W1212 17:29:49.306303 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.306614 kubelet[3599]: E1212 17:29:49.306585 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.370284 containerd[1901]: time="2025-12-12T17:29:49.370104237Z" level=info msg="connecting to shim c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b" address="unix:///run/containerd/s/590d667f8523d6e121385f3f2b3f2ac0f7c3db10daabfa3cec1ccf7997dccec2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:29:49.386813 kubelet[3599]: E1212 17:29:49.386752 3599 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:29:49.388113 kubelet[3599]: W1212 17:29:49.387000 3599 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:29:49.388113 kubelet[3599]: E1212 17:29:49.387104 3599 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:29:49.472751 systemd[1]: Started cri-containerd-c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b.scope - libcontainer container c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b. 
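Each FlexVolume triplet above is one probe cycle: the kubelet execs a driver binary that is absent on this host ("executable file not found in $PATH"), receives empty output, and unmarshalling that empty output produces exactly the logged error. The uds driver is installed by calico-node's flexvol-driver container, which is only just starting here, so the errors stop once it has run. Reproducing the error string:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status struct {
		Status string `json:"status"`
	}
	// Empty driver output, as in the log's `output: ""`.
	err := json.Unmarshal([]byte(""), &status)
	fmt.Println(err) // unexpected end of JSON input
}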
Dec 12 17:29:49.702439 containerd[1901]: time="2025-12-12T17:29:49.702204670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sjkq7,Uid:e5b843a2-3da0-42e0-9eb8-48bbfe64cd4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\"" Dec 12 17:29:49.709752 containerd[1901]: time="2025-12-12T17:29:49.709584034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:29:49.749140 containerd[1901]: time="2025-12-12T17:29:49.749065954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7586b756b8-7qkn6,Uid:271675fb-b761-4f89-a7bd-27ce30c7f80a,Namespace:calico-system,Attempt:0,} returns sandbox id \"af123ca2a30ea007d22cce47235ada56fe4adeb867cbd01b6b1316e15f0170b7\"" Dec 12 17:29:50.246838 kubelet[3599]: E1212 17:29:50.246756 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:29:50.844146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764754610.mount: Deactivated successfully. Dec 12 17:29:50.972094 containerd[1901]: time="2025-12-12T17:29:50.972015517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:50.975004 containerd[1901]: time="2025-12-12T17:29:50.974734477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Dec 12 17:29:50.977385 containerd[1901]: time="2025-12-12T17:29:50.977299621Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:50.981722 containerd[1901]: time="2025-12-12T17:29:50.981670573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:50.983611 containerd[1901]: time="2025-12-12T17:29:50.983432461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.273031827s" Dec 12 17:29:50.983611 containerd[1901]: time="2025-12-12T17:29:50.983489401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:29:50.985889 containerd[1901]: time="2025-12-12T17:29:50.985208077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:29:50.993739 containerd[1901]: time="2025-12-12T17:29:50.993682537Z" level=info msg="CreateContainer within sandbox \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:29:51.021157 containerd[1901]: time="2025-12-12T17:29:51.021048285Z" level=info 
msg="Container 6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:51.045774 containerd[1901]: time="2025-12-12T17:29:51.045627237Z" level=info msg="CreateContainer within sandbox \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404\"" Dec 12 17:29:51.046548 containerd[1901]: time="2025-12-12T17:29:51.046483629Z" level=info msg="StartContainer for \"6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404\"" Dec 12 17:29:51.050603 containerd[1901]: time="2025-12-12T17:29:51.050510385Z" level=info msg="connecting to shim 6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404" address="unix:///run/containerd/s/590d667f8523d6e121385f3f2b3f2ac0f7c3db10daabfa3cec1ccf7997dccec2" protocol=ttrpc version=3 Dec 12 17:29:51.093605 systemd[1]: Started cri-containerd-6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404.scope - libcontainer container 6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404. Dec 12 17:29:51.215065 containerd[1901]: time="2025-12-12T17:29:51.213954262Z" level=info msg="StartContainer for \"6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404\" returns successfully" Dec 12 17:29:51.245625 systemd[1]: cri-containerd-6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404.scope: Deactivated successfully. Dec 12 17:29:51.258940 containerd[1901]: time="2025-12-12T17:29:51.258884890Z" level=info msg="received container exit event container_id:\"6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404\" id:\"6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404\" pid:4155 exited_at:{seconds:1765560591 nanos:256629946}" Dec 12 17:29:51.315944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6785a08db98259d388958bde88bc2730a90bdcc0a91108e19492eec1ef656404-rootfs.mount: Deactivated successfully. 
Dec 12 17:29:52.248148 kubelet[3599]: E1212 17:29:52.246462 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:29:53.003360 containerd[1901]: time="2025-12-12T17:29:53.003175427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:53.006931 containerd[1901]: time="2025-12-12T17:29:53.006611075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Dec 12 17:29:53.009049 containerd[1901]: time="2025-12-12T17:29:53.008989775Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:53.015799 containerd[1901]: time="2025-12-12T17:29:53.015730247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:53.017193 containerd[1901]: time="2025-12-12T17:29:53.017148899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.031882934s" Dec 12 17:29:53.017521 containerd[1901]: time="2025-12-12T17:29:53.017353595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:29:53.019577 containerd[1901]: time="2025-12-12T17:29:53.019276223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:29:53.054703 containerd[1901]: time="2025-12-12T17:29:53.054648251Z" level=info msg="CreateContainer within sandbox \"af123ca2a30ea007d22cce47235ada56fe4adeb867cbd01b6b1316e15f0170b7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:29:53.070694 containerd[1901]: time="2025-12-12T17:29:53.070623335Z" level=info msg="Container 1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:53.080638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29827505.mount: Deactivated successfully. 
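The "m=+36.178695877"-style suffixes on kubelet timestamps throughout this log are Go's monotonic clock reading: time.Time.String() appends it whenever the value carries one, and subtractions between such values use the monotonic reading, so wall-clock adjustments cannot skew the logged durations. A trivial demonstration:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries a monotonic reading
	time.Sleep(50 * time.Millisecond)
	now := time.Now()
	fmt.Println(now)            // prints "... m=+0.05...", like the log
	fmt.Println(now.Sub(start)) // computed from the monotonic readings
}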
Dec 12 17:29:53.094070 containerd[1901]: time="2025-12-12T17:29:53.093953759Z" level=info msg="CreateContainer within sandbox \"af123ca2a30ea007d22cce47235ada56fe4adeb867cbd01b6b1316e15f0170b7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4\"" Dec 12 17:29:53.095821 containerd[1901]: time="2025-12-12T17:29:53.095277359Z" level=info msg="StartContainer for \"1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4\"" Dec 12 17:29:53.097961 containerd[1901]: time="2025-12-12T17:29:53.097894547Z" level=info msg="connecting to shim 1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4" address="unix:///run/containerd/s/11ba1f74c50653a67fffa61d6bb10909d01a396cc02c0acfa8c26661e86f1d7a" protocol=ttrpc version=3 Dec 12 17:29:53.140609 systemd[1]: Started cri-containerd-1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4.scope - libcontainer container 1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4. Dec 12 17:29:53.226464 containerd[1901]: time="2025-12-12T17:29:53.224172804Z" level=info msg="StartContainer for \"1f37a9e96d4f0184dd61cc974d79a796601c454addcc12f78b61cbbeccbb40b4\" returns successfully" Dec 12 17:29:54.247460 kubelet[3599]: E1212 17:29:54.247366 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:29:54.557461 kubelet[3599]: I1212 17:29:54.557267 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7586b756b8-7qkn6" podStartSLOduration=3.290292838 podStartE2EDuration="6.557247506s" podCreationTimestamp="2025-12-12 17:29:48 +0000 UTC" firstStartedPulling="2025-12-12 17:29:49.751734803 +0000 UTC m=+32.911741209" lastFinishedPulling="2025-12-12 17:29:53.018689447 +0000 UTC m=+36.178695877" observedRunningTime="2025-12-12 17:29:53.554787565 +0000 UTC m=+36.714793995" watchObservedRunningTime="2025-12-12 17:29:54.557247506 +0000 UTC m=+37.717254008" Dec 12 17:29:56.028848 containerd[1901]: time="2025-12-12T17:29:56.028722110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:56.030784 containerd[1901]: time="2025-12-12T17:29:56.030726842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 12 17:29:56.033348 containerd[1901]: time="2025-12-12T17:29:56.033057374Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:56.038895 containerd[1901]: time="2025-12-12T17:29:56.038838974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:29:56.040385 containerd[1901]: time="2025-12-12T17:29:56.040069466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.020714331s" Dec 12 17:29:56.040385 containerd[1901]: time="2025-12-12T17:29:56.040125782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:29:56.049663 containerd[1901]: time="2025-12-12T17:29:56.049608626Z" level=info msg="CreateContainer within sandbox \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:29:56.068440 containerd[1901]: time="2025-12-12T17:29:56.068363990Z" level=info msg="Container 5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:56.079380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747164805.mount: Deactivated successfully. Dec 12 17:29:56.091797 containerd[1901]: time="2025-12-12T17:29:56.091737122Z" level=info msg="CreateContainer within sandbox \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2\"" Dec 12 17:29:56.093636 containerd[1901]: time="2025-12-12T17:29:56.093564602Z" level=info msg="StartContainer for \"5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2\"" Dec 12 17:29:56.098572 containerd[1901]: time="2025-12-12T17:29:56.098490038Z" level=info msg="connecting to shim 5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2" address="unix:///run/containerd/s/590d667f8523d6e121385f3f2b3f2ac0f7c3db10daabfa3cec1ccf7997dccec2" protocol=ttrpc version=3 Dec 12 17:29:56.148645 systemd[1]: Started cri-containerd-5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2.scope - libcontainer container 5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2. Dec 12 17:29:56.246704 kubelet[3599]: E1212 17:29:56.246613 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:29:56.264760 containerd[1901]: time="2025-12-12T17:29:56.264597231Z" level=info msg="StartContainer for \"5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2\" returns successfully" Dec 12 17:29:57.522361 containerd[1901]: time="2025-12-12T17:29:57.522190169Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:29:57.527527 systemd[1]: cri-containerd-5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2.scope: Deactivated successfully. Dec 12 17:29:57.528045 systemd[1]: cri-containerd-5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2.scope: Consumed 1.019s CPU time, 187.3M memory peak, 165.9M written to disk. 
Dec 12 17:29:57.531179 containerd[1901]: time="2025-12-12T17:29:57.530962841Z" level=info msg="received container exit event container_id:\"5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2\" id:\"5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2\" pid:4263 exited_at:{seconds:1765560597 nanos:530171969}" Dec 12 17:29:57.583185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f07151c8324630b1ddd973d33556403779f00da7da0720387b6bfda0de464d2-rootfs.mount: Deactivated successfully. Dec 12 17:29:57.603815 kubelet[3599]: I1212 17:29:57.603764 3599 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 17:29:57.762453 systemd[1]: Created slice kubepods-besteffort-podb78f3469_6603_4b67_beed_705184b4511e.slice - libcontainer container kubepods-besteffort-podb78f3469_6603_4b67_beed_705184b4511e.slice. Dec 12 17:29:57.820407 systemd[1]: Created slice kubepods-burstable-podf9068c32_dbae_4f8b_8dc5_106c0f06bde7.slice - libcontainer container kubepods-burstable-podf9068c32_dbae_4f8b_8dc5_106c0f06bde7.slice. Dec 12 17:29:57.839915 kubelet[3599]: I1212 17:29:57.839837 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx67m\" (UniqueName: \"kubernetes.io/projected/b78f3469-6603-4b67-beed-705184b4511e-kube-api-access-rx67m\") pod \"calico-kube-controllers-65c4f9478f-pv7hn\" (UID: \"b78f3469-6603-4b67-beed-705184b4511e\") " pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" Dec 12 17:29:57.855695 kubelet[3599]: I1212 17:29:57.839923 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b78f3469-6603-4b67-beed-705184b4511e-tigera-ca-bundle\") pod \"calico-kube-controllers-65c4f9478f-pv7hn\" (UID: \"b78f3469-6603-4b67-beed-705184b4511e\") " pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" Dec 12 17:29:57.909106 systemd[1]: Created slice kubepods-besteffort-pod37d905b7_8baa_415e_b08a_01c4aafd5651.slice - libcontainer container kubepods-besteffort-pod37d905b7_8baa_415e_b08a_01c4aafd5651.slice. Dec 12 17:29:57.940578 kubelet[3599]: I1212 17:29:57.940529 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9068c32-dbae-4f8b-8dc5-106c0f06bde7-config-volume\") pod \"coredns-66bc5c9577-wkfjd\" (UID: \"f9068c32-dbae-4f8b-8dc5-106c0f06bde7\") " pod="kube-system/coredns-66bc5c9577-wkfjd" Dec 12 17:29:57.940886 kubelet[3599]: I1212 17:29:57.940858 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw2fq\" (UniqueName: \"kubernetes.io/projected/f9068c32-dbae-4f8b-8dc5-106c0f06bde7-kube-api-access-kw2fq\") pod \"coredns-66bc5c9577-wkfjd\" (UID: \"f9068c32-dbae-4f8b-8dc5-106c0f06bde7\") " pod="kube-system/coredns-66bc5c9577-wkfjd" Dec 12 17:29:57.984847 systemd[1]: Created slice kubepods-burstable-pod1eddba4c_1bd5_4118_9720_635877fa49af.slice - libcontainer container kubepods-burstable-pod1eddba4c_1bd5_4118_9720_635877fa49af.slice. 
Dec 12 17:29:58.041864 kubelet[3599]: I1212 17:29:58.041812 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/37d905b7-8baa-415e-b08a-01c4aafd5651-calico-apiserver-certs\") pod \"calico-apiserver-6bb58fbcd4-g9dtq\" (UID: \"37d905b7-8baa-415e-b08a-01c4aafd5651\") " pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" Dec 12 17:29:58.042334 kubelet[3599]: I1212 17:29:58.042175 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj296\" (UniqueName: \"kubernetes.io/projected/37d905b7-8baa-415e-b08a-01c4aafd5651-kube-api-access-sj296\") pod \"calico-apiserver-6bb58fbcd4-g9dtq\" (UID: \"37d905b7-8baa-415e-b08a-01c4aafd5651\") " pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" Dec 12 17:29:58.087294 systemd[1]: Created slice kubepods-besteffort-pod358ee8cb_07e7_4336_8448_2d22cafc7817.slice - libcontainer container kubepods-besteffort-pod358ee8cb_07e7_4336_8448_2d22cafc7817.slice. Dec 12 17:29:58.143716 kubelet[3599]: I1212 17:29:58.143632 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572wn\" (UniqueName: \"kubernetes.io/projected/1eddba4c-1bd5-4118-9720-635877fa49af-kube-api-access-572wn\") pod \"coredns-66bc5c9577-nrkl4\" (UID: \"1eddba4c-1bd5-4118-9720-635877fa49af\") " pod="kube-system/coredns-66bc5c9577-nrkl4" Dec 12 17:29:58.143957 kubelet[3599]: I1212 17:29:58.143898 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eddba4c-1bd5-4118-9720-635877fa49af-config-volume\") pod \"coredns-66bc5c9577-nrkl4\" (UID: \"1eddba4c-1bd5-4118-9720-635877fa49af\") " pod="kube-system/coredns-66bc5c9577-nrkl4" Dec 12 17:29:58.150957 containerd[1901]: time="2025-12-12T17:29:58.149300920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c4f9478f-pv7hn,Uid:b78f3469-6603-4b67-beed-705184b4511e,Namespace:calico-system,Attempt:0,}" Dec 12 17:29:58.214403 systemd[1]: Created slice kubepods-besteffort-pod68e53e1a_54da_4cf3_b329_4a29532261fd.slice - libcontainer container kubepods-besteffort-pod68e53e1a_54da_4cf3_b329_4a29532261fd.slice. 
Dec 12 17:29:58.245200 kubelet[3599]: I1212 17:29:58.245150 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68e53e1a-54da-4cf3-b329-4a29532261fd-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-224vp\" (UID: \"68e53e1a-54da-4cf3-b329-4a29532261fd\") " pod="calico-system/goldmane-7c778bb748-224vp" Dec 12 17:29:58.252470 kubelet[3599]: I1212 17:29:58.245653 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/68e53e1a-54da-4cf3-b329-4a29532261fd-goldmane-key-pair\") pod \"goldmane-7c778bb748-224vp\" (UID: \"68e53e1a-54da-4cf3-b329-4a29532261fd\") " pod="calico-system/goldmane-7c778bb748-224vp" Dec 12 17:29:58.252470 kubelet[3599]: I1212 17:29:58.245705 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrxn\" (UniqueName: \"kubernetes.io/projected/358ee8cb-07e7-4336-8448-2d22cafc7817-kube-api-access-lvrxn\") pod \"calico-apiserver-6bb58fbcd4-x7fhr\" (UID: \"358ee8cb-07e7-4336-8448-2d22cafc7817\") " pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" Dec 12 17:29:58.252470 kubelet[3599]: I1212 17:29:58.245790 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/358ee8cb-07e7-4336-8448-2d22cafc7817-calico-apiserver-certs\") pod \"calico-apiserver-6bb58fbcd4-x7fhr\" (UID: \"358ee8cb-07e7-4336-8448-2d22cafc7817\") " pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" Dec 12 17:29:58.252470 kubelet[3599]: I1212 17:29:58.245842 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4shc7\" (UniqueName: \"kubernetes.io/projected/68e53e1a-54da-4cf3-b329-4a29532261fd-kube-api-access-4shc7\") pod \"goldmane-7c778bb748-224vp\" (UID: \"68e53e1a-54da-4cf3-b329-4a29532261fd\") " pod="calico-system/goldmane-7c778bb748-224vp" Dec 12 17:29:58.252470 kubelet[3599]: I1212 17:29:58.246051 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68e53e1a-54da-4cf3-b329-4a29532261fd-config\") pod \"goldmane-7c778bb748-224vp\" (UID: \"68e53e1a-54da-4cf3-b329-4a29532261fd\") " pod="calico-system/goldmane-7c778bb748-224vp" Dec 12 17:29:58.266280 containerd[1901]: time="2025-12-12T17:29:58.266189873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkfjd,Uid:f9068c32-dbae-4f8b-8dc5-106c0f06bde7,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:58.285084 systemd[1]: Created slice kubepods-besteffort-poddfaeee63_32d9_4902_9d2a_576429123236.slice - libcontainer container kubepods-besteffort-poddfaeee63_32d9_4902_9d2a_576429123236.slice. Dec 12 17:29:58.302951 systemd[1]: Created slice kubepods-besteffort-pod7ebbdf18_ecc6_4daf_a813_62e2bac25944.slice - libcontainer container kubepods-besteffort-pod7ebbdf18_ecc6_4daf_a813_62e2bac25944.slice. 
Dec 12 17:29:58.328681 containerd[1901]: time="2025-12-12T17:29:58.328294133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-g9dtq,Uid:37d905b7-8baa-415e-b08a-01c4aafd5651,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:29:58.345267 containerd[1901]: time="2025-12-12T17:29:58.344957549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrkl4,Uid:1eddba4c-1bd5-4118-9720-635877fa49af,Namespace:kube-system,Attempt:0,}" Dec 12 17:29:58.347663 kubelet[3599]: I1212 17:29:58.347521 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-ca-bundle\") pod \"whisker-5cb64ddbb6-k682t\" (UID: \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\") " pod="calico-system/whisker-5cb64ddbb6-k682t" Dec 12 17:29:58.348098 kubelet[3599]: I1212 17:29:58.347638 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8t2g\" (UniqueName: \"kubernetes.io/projected/7ebbdf18-ecc6-4daf-a813-62e2bac25944-kube-api-access-h8t2g\") pod \"whisker-5cb64ddbb6-k682t\" (UID: \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\") " pod="calico-system/whisker-5cb64ddbb6-k682t" Dec 12 17:29:58.348610 kubelet[3599]: I1212 17:29:58.348490 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-backend-key-pair\") pod \"whisker-5cb64ddbb6-k682t\" (UID: \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\") " pod="calico-system/whisker-5cb64ddbb6-k682t" Dec 12 17:29:58.421125 containerd[1901]: time="2025-12-12T17:29:58.421059438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljkxc,Uid:dfaeee63-32d9-4902-9d2a-576429123236,Namespace:calico-system,Attempt:0,}" Dec 12 17:29:58.439960 containerd[1901]: time="2025-12-12T17:29:58.439865394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-x7fhr,Uid:358ee8cb-07e7-4336-8448-2d22cafc7817,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:29:58.527268 containerd[1901]: time="2025-12-12T17:29:58.527194458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-224vp,Uid:68e53e1a-54da-4cf3-b329-4a29532261fd,Namespace:calico-system,Attempt:0,}" Dec 12 17:29:58.634703 containerd[1901]: time="2025-12-12T17:29:58.634496803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb64ddbb6-k682t,Uid:7ebbdf18-ecc6-4daf-a813-62e2bac25944,Namespace:calico-system,Attempt:0,}" Dec 12 17:29:58.648768 containerd[1901]: time="2025-12-12T17:29:58.646165807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:29:58.766736 containerd[1901]: time="2025-12-12T17:29:58.766671763Z" level=error msg="Failed to destroy network for sandbox \"94aeb11dae39b7f42073848cf9fe081f46541c4ec911c4efb8243e4adff04e45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:58.775210 containerd[1901]: time="2025-12-12T17:29:58.775113331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c4f9478f-pv7hn,Uid:b78f3469-6603-4b67-beed-705184b4511e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"94aeb11dae39b7f42073848cf9fe081f46541c4ec911c4efb8243e4adff04e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:58.777512 kubelet[3599]: E1212 17:29:58.775582 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94aeb11dae39b7f42073848cf9fe081f46541c4ec911c4efb8243e4adff04e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:58.777512 kubelet[3599]: E1212 17:29:58.775835 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94aeb11dae39b7f42073848cf9fe081f46541c4ec911c4efb8243e4adff04e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" Dec 12 17:29:58.777512 kubelet[3599]: E1212 17:29:58.775918 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94aeb11dae39b7f42073848cf9fe081f46541c4ec911c4efb8243e4adff04e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" Dec 12 17:29:58.780688 kubelet[3599]: E1212 17:29:58.776980 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65c4f9478f-pv7hn_calico-system(b78f3469-6603-4b67-beed-705184b4511e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65c4f9478f-pv7hn_calico-system(b78f3469-6603-4b67-beed-705184b4511e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94aeb11dae39b7f42073848cf9fe081f46541c4ec911c4efb8243e4adff04e45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e" Dec 12 17:29:58.778355 systemd[1]: run-netns-cni\x2d4fbde704\x2df346\x2deeb7\x2dd515\x2deb937ffac53f.mount: Deactivated successfully. 
Dec 12 17:29:58.968382 containerd[1901]: time="2025-12-12T17:29:58.966031604Z" level=error msg="Failed to destroy network for sandbox \"3551bc740adbdd2d43fd3df008da63146987cd60e48fb1e2ad2c63e3ad15c044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:58.978378 containerd[1901]: time="2025-12-12T17:29:58.977487224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-g9dtq,Uid:37d905b7-8baa-415e-b08a-01c4aafd5651,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3551bc740adbdd2d43fd3df008da63146987cd60e48fb1e2ad2c63e3ad15c044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:58.978614 kubelet[3599]: E1212 17:29:58.977843 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3551bc740adbdd2d43fd3df008da63146987cd60e48fb1e2ad2c63e3ad15c044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:58.978614 kubelet[3599]: E1212 17:29:58.977965 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3551bc740adbdd2d43fd3df008da63146987cd60e48fb1e2ad2c63e3ad15c044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" Dec 12 17:29:58.978614 kubelet[3599]: E1212 17:29:58.977997 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3551bc740adbdd2d43fd3df008da63146987cd60e48fb1e2ad2c63e3ad15c044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" Dec 12 17:29:58.977623 systemd[1]: run-netns-cni\x2d92644ed3\x2df9ec\x2d5153\x2d2660\x2d0f300c4e5a13.mount: Deactivated successfully. 
Dec 12 17:29:58.978914 kubelet[3599]: E1212 17:29:58.978096 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bb58fbcd4-g9dtq_calico-apiserver(37d905b7-8baa-415e-b08a-01c4aafd5651)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bb58fbcd4-g9dtq_calico-apiserver(37d905b7-8baa-415e-b08a-01c4aafd5651)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3551bc740adbdd2d43fd3df008da63146987cd60e48fb1e2ad2c63e3ad15c044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651" Dec 12 17:29:59.005600 containerd[1901]: time="2025-12-12T17:29:59.005496472Z" level=error msg="Failed to destroy network for sandbox \"292d4fcd919994ff0e18d71157e26418f576577853cd429402279a3ec1a927f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.009583 containerd[1901]: time="2025-12-12T17:29:59.008537620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkfjd,Uid:f9068c32-dbae-4f8b-8dc5-106c0f06bde7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"292d4fcd919994ff0e18d71157e26418f576577853cd429402279a3ec1a927f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.009721 kubelet[3599]: E1212 17:29:59.008967 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"292d4fcd919994ff0e18d71157e26418f576577853cd429402279a3ec1a927f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.009721 kubelet[3599]: E1212 17:29:59.009598 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"292d4fcd919994ff0e18d71157e26418f576577853cd429402279a3ec1a927f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wkfjd" Dec 12 17:29:59.009721 kubelet[3599]: E1212 17:29:59.009649 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"292d4fcd919994ff0e18d71157e26418f576577853cd429402279a3ec1a927f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wkfjd" Dec 12 17:29:59.014706 kubelet[3599]: E1212 17:29:59.010306 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wkfjd_kube-system(f9068c32-dbae-4f8b-8dc5-106c0f06bde7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-wkfjd_kube-system(f9068c32-dbae-4f8b-8dc5-106c0f06bde7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"292d4fcd919994ff0e18d71157e26418f576577853cd429402279a3ec1a927f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wkfjd" podUID="f9068c32-dbae-4f8b-8dc5-106c0f06bde7" Dec 12 17:29:59.015807 systemd[1]: run-netns-cni\x2d0e6e6b50\x2da19e\x2d60f6\x2dd43a\x2dfc6378a15b1b.mount: Deactivated successfully. Dec 12 17:29:59.035600 containerd[1901]: time="2025-12-12T17:29:59.035511185Z" level=error msg="Failed to destroy network for sandbox \"e1a83586410bc93e64724c49e7b81c99831e47cb5e0ddfb69410bb3b72358ebd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.041967 containerd[1901]: time="2025-12-12T17:29:59.041873225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cb64ddbb6-k682t,Uid:7ebbdf18-ecc6-4daf-a813-62e2bac25944,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a83586410bc93e64724c49e7b81c99831e47cb5e0ddfb69410bb3b72358ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.043273 kubelet[3599]: E1212 17:29:59.043213 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a83586410bc93e64724c49e7b81c99831e47cb5e0ddfb69410bb3b72358ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.043508 kubelet[3599]: E1212 17:29:59.043296 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a83586410bc93e64724c49e7b81c99831e47cb5e0ddfb69410bb3b72358ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cb64ddbb6-k682t" Dec 12 17:29:59.043508 kubelet[3599]: E1212 17:29:59.043367 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1a83586410bc93e64724c49e7b81c99831e47cb5e0ddfb69410bb3b72358ebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cb64ddbb6-k682t" Dec 12 17:29:59.043508 kubelet[3599]: E1212 17:29:59.043441 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cb64ddbb6-k682t_calico-system(7ebbdf18-ecc6-4daf-a813-62e2bac25944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cb64ddbb6-k682t_calico-system(7ebbdf18-ecc6-4daf-a813-62e2bac25944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e1a83586410bc93e64724c49e7b81c99831e47cb5e0ddfb69410bb3b72358ebd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cb64ddbb6-k682t" podUID="7ebbdf18-ecc6-4daf-a813-62e2bac25944" Dec 12 17:29:59.051749 containerd[1901]: time="2025-12-12T17:29:59.051475121Z" level=error msg="Failed to destroy network for sandbox \"d808c97377d90017df51c00f9f97a638a932a92e9d90ef9f11aeb8a1942c5748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.059745 containerd[1901]: time="2025-12-12T17:29:59.059615081Z" level=error msg="Failed to destroy network for sandbox \"8659a9ebedfb067f159f633a577aff2564f1e3a2db67b8579069c6b5fec00899\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.064341 containerd[1901]: time="2025-12-12T17:29:59.063830285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljkxc,Uid:dfaeee63-32d9-4902-9d2a-576429123236,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d808c97377d90017df51c00f9f97a638a932a92e9d90ef9f11aeb8a1942c5748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.065304 kubelet[3599]: E1212 17:29:59.065209 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d808c97377d90017df51c00f9f97a638a932a92e9d90ef9f11aeb8a1942c5748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.066011 kubelet[3599]: E1212 17:29:59.065490 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d808c97377d90017df51c00f9f97a638a932a92e9d90ef9f11aeb8a1942c5748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ljkxc" Dec 12 17:29:59.066011 kubelet[3599]: E1212 17:29:59.066003 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d808c97377d90017df51c00f9f97a638a932a92e9d90ef9f11aeb8a1942c5748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ljkxc" Dec 12 17:29:59.066456 kubelet[3599]: E1212 17:29:59.066137 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"d808c97377d90017df51c00f9f97a638a932a92e9d90ef9f11aeb8a1942c5748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:29:59.068612 containerd[1901]: time="2025-12-12T17:29:59.068389205Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-x7fhr,Uid:358ee8cb-07e7-4336-8448-2d22cafc7817,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8659a9ebedfb067f159f633a577aff2564f1e3a2db67b8579069c6b5fec00899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.069346 kubelet[3599]: E1212 17:29:59.069051 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8659a9ebedfb067f159f633a577aff2564f1e3a2db67b8579069c6b5fec00899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.070445 kubelet[3599]: E1212 17:29:59.069376 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8659a9ebedfb067f159f633a577aff2564f1e3a2db67b8579069c6b5fec00899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" Dec 12 17:29:59.070445 kubelet[3599]: E1212 17:29:59.069445 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8659a9ebedfb067f159f633a577aff2564f1e3a2db67b8579069c6b5fec00899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" Dec 12 17:29:59.070445 kubelet[3599]: E1212 17:29:59.069690 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bb58fbcd4-x7fhr_calico-apiserver(358ee8cb-07e7-4336-8448-2d22cafc7817)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bb58fbcd4-x7fhr_calico-apiserver(358ee8cb-07e7-4336-8448-2d22cafc7817)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8659a9ebedfb067f159f633a577aff2564f1e3a2db67b8579069c6b5fec00899\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817" Dec 12 17:29:59.075022 containerd[1901]: time="2025-12-12T17:29:59.074759909Z" level=error msg="Failed to destroy network for sandbox \"f089ea809845586666a8f4c5c332a8aaa26f0a53d4bb51ada72c1c036b6e9632\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.080263 containerd[1901]: time="2025-12-12T17:29:59.080103437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-224vp,Uid:68e53e1a-54da-4cf3-b329-4a29532261fd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f089ea809845586666a8f4c5c332a8aaa26f0a53d4bb51ada72c1c036b6e9632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.080535 containerd[1901]: time="2025-12-12T17:29:59.080185577Z" level=error msg="Failed to destroy network for sandbox \"a55ef178ba5c9214eb73f346dc60415a3579af0dcb0fd33171719eae22a720c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.081225 kubelet[3599]: E1212 17:29:59.080968 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f089ea809845586666a8f4c5c332a8aaa26f0a53d4bb51ada72c1c036b6e9632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.082926 kubelet[3599]: E1212 17:29:59.081498 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f089ea809845586666a8f4c5c332a8aaa26f0a53d4bb51ada72c1c036b6e9632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-224vp" Dec 12 17:29:59.082926 kubelet[3599]: E1212 17:29:59.082699 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f089ea809845586666a8f4c5c332a8aaa26f0a53d4bb51ada72c1c036b6e9632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-224vp" Dec 12 17:29:59.082926 kubelet[3599]: E1212 17:29:59.082825 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-224vp_calico-system(68e53e1a-54da-4cf3-b329-4a29532261fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-224vp_calico-system(68e53e1a-54da-4cf3-b329-4a29532261fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f089ea809845586666a8f4c5c332a8aaa26f0a53d4bb51ada72c1c036b6e9632\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd" Dec 12 17:29:59.089169 containerd[1901]: time="2025-12-12T17:29:59.088395785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrkl4,Uid:1eddba4c-1bd5-4118-9720-635877fa49af,Namespace:kube-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55ef178ba5c9214eb73f346dc60415a3579af0dcb0fd33171719eae22a720c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.090654 kubelet[3599]: E1212 17:29:59.089820 3599 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55ef178ba5c9214eb73f346dc60415a3579af0dcb0fd33171719eae22a720c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:29:59.090654 kubelet[3599]: E1212 17:29:59.089896 3599 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55ef178ba5c9214eb73f346dc60415a3579af0dcb0fd33171719eae22a720c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nrkl4" Dec 12 17:29:59.090654 kubelet[3599]: E1212 17:29:59.089933 3599 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a55ef178ba5c9214eb73f346dc60415a3579af0dcb0fd33171719eae22a720c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nrkl4" Dec 12 17:29:59.090886 kubelet[3599]: E1212 17:29:59.090026 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nrkl4_kube-system(1eddba4c-1bd5-4118-9720-635877fa49af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nrkl4_kube-system(1eddba4c-1bd5-4118-9720-635877fa49af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a55ef178ba5c9214eb73f346dc60415a3579af0dcb0fd33171719eae22a720c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nrkl4" podUID="1eddba4c-1bd5-4118-9720-635877fa49af" Dec 12 17:29:59.583659 systemd[1]: run-netns-cni\x2dc0f66af0\x2d5fe8\x2d5396\x2d71fb\x2dcb768fc17c70.mount: Deactivated successfully. Dec 12 17:29:59.583846 systemd[1]: run-netns-cni\x2d1cddc91f\x2da757\x2d078c\x2d6a77\x2db63aecd64193.mount: Deactivated successfully. Dec 12 17:29:59.583966 systemd[1]: run-netns-cni\x2d94f74651\x2d613e\x2d55d4\x2d8c05\x2dbd377c0d6847.mount: Deactivated successfully. Dec 12 17:29:59.584089 systemd[1]: run-netns-cni\x2d78ffe2d2\x2d7898\x2df68f\x2d1440\x2d816cbe71b2f1.mount: Deactivated successfully. Dec 12 17:29:59.584212 systemd[1]: run-netns-cni\x2d4896cb0a\x2d48b6\x2db101\x2dd221\x2de08b6960cb45.mount: Deactivated successfully. Dec 12 17:30:05.049440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373073286.mount: Deactivated successfully. 
Dec 12 17:30:05.102649 containerd[1901]: time="2025-12-12T17:30:05.102569711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:30:05.104552 containerd[1901]: time="2025-12-12T17:30:05.104478263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:30:05.106985 containerd[1901]: time="2025-12-12T17:30:05.106909055Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:30:05.111357 containerd[1901]: time="2025-12-12T17:30:05.111238463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:30:05.112713 containerd[1901]: time="2025-12-12T17:30:05.112407167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.46617188s" Dec 12 17:30:05.112713 containerd[1901]: time="2025-12-12T17:30:05.112465151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:30:05.137130 containerd[1901]: time="2025-12-12T17:30:05.137066123Z" level=info msg="CreateContainer within sandbox \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:30:05.160347 containerd[1901]: time="2025-12-12T17:30:05.160182923Z" level=info msg="Container d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:30:05.188190 containerd[1901]: time="2025-12-12T17:30:05.188110019Z" level=info msg="CreateContainer within sandbox \"c9394c9fff16edf0777e386ade5f024e9247b89e340fce8d21a9e448aea2348b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f\"" Dec 12 17:30:05.189944 containerd[1901]: time="2025-12-12T17:30:05.189886691Z" level=info msg="StartContainer for \"d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f\"" Dec 12 17:30:05.193941 containerd[1901]: time="2025-12-12T17:30:05.193867595Z" level=info msg="connecting to shim d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f" address="unix:///run/containerd/s/590d667f8523d6e121385f3f2b3f2ac0f7c3db10daabfa3cec1ccf7997dccec2" protocol=ttrpc version=3 Dec 12 17:30:05.244732 systemd[1]: Started cri-containerd-d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f.scope - libcontainer container d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f. Dec 12 17:30:05.364778 containerd[1901]: time="2025-12-12T17:30:05.364553052Z" level=info msg="StartContainer for \"d5180318f520470fadc7652cafcfe263068d2ebb25f7342b79f5fc8e86536a2f\" returns successfully" Dec 12 17:30:05.641776 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:30:05.641973 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Dec 12 17:30:05.740160 kubelet[3599]: I1212 17:30:05.739984 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sjkq7" podStartSLOduration=2.332333205 podStartE2EDuration="17.739898678s" podCreationTimestamp="2025-12-12 17:29:48 +0000 UTC" firstStartedPulling="2025-12-12 17:29:49.706655122 +0000 UTC m=+32.866661540" lastFinishedPulling="2025-12-12 17:30:05.114220607 +0000 UTC m=+48.274227013" observedRunningTime="2025-12-12 17:30:05.737832134 +0000 UTC m=+48.897838564" watchObservedRunningTime="2025-12-12 17:30:05.739898678 +0000 UTC m=+48.899905108" Dec 12 17:30:06.021713 kubelet[3599]: I1212 17:30:06.021637 3599 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-backend-key-pair\") pod \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\" (UID: \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\") " Dec 12 17:30:06.021881 kubelet[3599]: I1212 17:30:06.021747 3599 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8t2g\" (UniqueName: \"kubernetes.io/projected/7ebbdf18-ecc6-4daf-a813-62e2bac25944-kube-api-access-h8t2g\") pod \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\" (UID: \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\") " Dec 12 17:30:06.021881 kubelet[3599]: I1212 17:30:06.021800 3599 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-ca-bundle\") pod \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\" (UID: \"7ebbdf18-ecc6-4daf-a813-62e2bac25944\") " Dec 12 17:30:06.022980 kubelet[3599]: I1212 17:30:06.022523 3599 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7ebbdf18-ecc6-4daf-a813-62e2bac25944" (UID: "7ebbdf18-ecc6-4daf-a813-62e2bac25944"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:30:06.035013 kubelet[3599]: I1212 17:30:06.034946 3599 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebbdf18-ecc6-4daf-a813-62e2bac25944-kube-api-access-h8t2g" (OuterVolumeSpecName: "kube-api-access-h8t2g") pod "7ebbdf18-ecc6-4daf-a813-62e2bac25944" (UID: "7ebbdf18-ecc6-4daf-a813-62e2bac25944"). InnerVolumeSpecName "kube-api-access-h8t2g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:30:06.037093 kubelet[3599]: I1212 17:30:06.036745 3599 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7ebbdf18-ecc6-4daf-a813-62e2bac25944" (UID: "7ebbdf18-ecc6-4daf-a813-62e2bac25944"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:30:06.048202 systemd[1]: var-lib-kubelet-pods-7ebbdf18\x2decc6\x2d4daf\x2da813\x2d62e2bac25944-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8t2g.mount: Deactivated successfully. Dec 12 17:30:06.048464 systemd[1]: var-lib-kubelet-pods-7ebbdf18\x2decc6\x2d4daf\x2da813\x2d62e2bac25944-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
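
[Annotation] The pod_startup_latency_tracker entries (the calico-node one above at 17:30:05.739, and the calico-typha one earlier at 17:29:54.557) satisfy a simple identity: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick Go check against the calico-node figures, with timestamps copied from the entry; it reproduces the logged durations to within tens of nanoseconds, a residue presumably due to the tracker snapshotting its clocks at slightly different instants:

package main

import (
	"fmt"
	"time"
)

// mustParse reads the "+0000 UTC" timestamps as they appear in the kubelet
// entry; Go accepts the optional fractional seconds during parsing.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-12 17:29:48 +0000 UTC")
	firstPull := mustParse("2025-12-12 17:29:49.706655122 +0000 UTC")
	lastPull := mustParse("2025-12-12 17:30:05.114220607 +0000 UTC")
	observed := mustParse("2025-12-12 17:30:05.739898678 +0000 UTC")

	e2e := observed.Sub(created)         // logged: podStartE2EDuration=17.739898678s
	slo := e2e - lastPull.Sub(firstPull) // logged: podStartSLOduration≈2.332333205
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo)
}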
Dec 12 17:30:06.123115 kubelet[3599]: I1212 17:30:06.123047 3599 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8t2g\" (UniqueName: \"kubernetes.io/projected/7ebbdf18-ecc6-4daf-a813-62e2bac25944-kube-api-access-h8t2g\") on node \"ip-172-31-24-26\" DevicePath \"\"" Dec 12 17:30:06.123115 kubelet[3599]: I1212 17:30:06.123104 3599 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-ca-bundle\") on node \"ip-172-31-24-26\" DevicePath \"\"" Dec 12 17:30:06.123355 kubelet[3599]: I1212 17:30:06.123129 3599 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7ebbdf18-ecc6-4daf-a813-62e2bac25944-whisker-backend-key-pair\") on node \"ip-172-31-24-26\" DevicePath \"\"" Dec 12 17:30:06.693286 systemd[1]: Removed slice kubepods-besteffort-pod7ebbdf18_ecc6_4daf_a813_62e2bac25944.slice - libcontainer container kubepods-besteffort-pod7ebbdf18_ecc6_4daf_a813_62e2bac25944.slice. Dec 12 17:30:06.819014 systemd[1]: Created slice kubepods-besteffort-pod9da8aa8d_66f3_492c_808d_d01d872ee6b8.slice - libcontainer container kubepods-besteffort-pod9da8aa8d_66f3_492c_808d_d01d872ee6b8.slice. Dec 12 17:30:06.929636 kubelet[3599]: I1212 17:30:06.929572 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9da8aa8d-66f3-492c-808d-d01d872ee6b8-whisker-backend-key-pair\") pod \"whisker-6ccf855d9b-zb2xt\" (UID: \"9da8aa8d-66f3-492c-808d-d01d872ee6b8\") " pod="calico-system/whisker-6ccf855d9b-zb2xt" Dec 12 17:30:06.930259 kubelet[3599]: I1212 17:30:06.929660 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9da8aa8d-66f3-492c-808d-d01d872ee6b8-whisker-ca-bundle\") pod \"whisker-6ccf855d9b-zb2xt\" (UID: \"9da8aa8d-66f3-492c-808d-d01d872ee6b8\") " pod="calico-system/whisker-6ccf855d9b-zb2xt" Dec 12 17:30:06.930259 kubelet[3599]: I1212 17:30:06.929738 3599 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg85j\" (UniqueName: \"kubernetes.io/projected/9da8aa8d-66f3-492c-808d-d01d872ee6b8-kube-api-access-fg85j\") pod \"whisker-6ccf855d9b-zb2xt\" (UID: \"9da8aa8d-66f3-492c-808d-d01d872ee6b8\") " pod="calico-system/whisker-6ccf855d9b-zb2xt" Dec 12 17:30:07.136771 containerd[1901]: time="2025-12-12T17:30:07.136680433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccf855d9b-zb2xt,Uid:9da8aa8d-66f3-492c-808d-d01d872ee6b8,Namespace:calico-system,Attempt:0,}" Dec 12 17:30:07.259271 kubelet[3599]: I1212 17:30:07.258944 3599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ebbdf18-ecc6-4daf-a813-62e2bac25944" path="/var/lib/kubelet/pods/7ebbdf18-ecc6-4daf-a813-62e2bac25944/volumes" Dec 12 17:30:07.461993 (udev-worker)[4552]: Network interface NamePolicy= disabled on kernel command line. 
Dec 12 17:30:07.463947 systemd-networkd[1813]: cali78920951076: Link UP Dec 12 17:30:07.464813 systemd-networkd[1813]: cali78920951076: Gained carrier Dec 12 17:30:07.496553 containerd[1901]: 2025-12-12 17:30:07.189 [INFO][4635] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:30:07.496553 containerd[1901]: 2025-12-12 17:30:07.273 [INFO][4635] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0 whisker-6ccf855d9b- calico-system 9da8aa8d-66f3-492c-808d-d01d872ee6b8 942 0 2025-12-12 17:30:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6ccf855d9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-26 whisker-6ccf855d9b-zb2xt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali78920951076 [] [] }} ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-" Dec 12 17:30:07.496553 containerd[1901]: 2025-12-12 17:30:07.273 [INFO][4635] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.496553 containerd[1901]: 2025-12-12 17:30:07.360 [INFO][4645] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" HandleID="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Workload="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.360 [INFO][4645] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" HandleID="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Workload="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031b920), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-26", "pod":"whisker-6ccf855d9b-zb2xt", "timestamp":"2025-12-12 17:30:07.360089282 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.360 [INFO][4645] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.360 [INFO][4645] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.360 [INFO][4645] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26' Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.377 [INFO][4645] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" host="ip-172-31-24-26" Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.397 [INFO][4645] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26" Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.408 [INFO][4645] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.413 [INFO][4645] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.420 [INFO][4645] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:07.496921 containerd[1901]: 2025-12-12 17:30:07.420 [INFO][4645] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" host="ip-172-31-24-26" Dec 12 17:30:07.498190 containerd[1901]: 2025-12-12 17:30:07.425 [INFO][4645] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d Dec 12 17:30:07.498190 containerd[1901]: 2025-12-12 17:30:07.432 [INFO][4645] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" host="ip-172-31-24-26" Dec 12 17:30:07.498190 containerd[1901]: 2025-12-12 17:30:07.442 [INFO][4645] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.193/26] block=192.168.1.192/26 handle="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" host="ip-172-31-24-26" Dec 12 17:30:07.498190 containerd[1901]: 2025-12-12 17:30:07.442 [INFO][4645] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.193/26] handle="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" host="ip-172-31-24-26" Dec 12 17:30:07.498190 containerd[1901]: 2025-12-12 17:30:07.442 [INFO][4645] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:30:07.498190 containerd[1901]: 2025-12-12 17:30:07.442 [INFO][4645] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.193/26] IPv6=[] ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" HandleID="k8s-pod-network.f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Workload="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.498718 containerd[1901]: 2025-12-12 17:30:07.449 [INFO][4635] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0", GenerateName:"whisker-6ccf855d9b-", Namespace:"calico-system", SelfLink:"", UID:"9da8aa8d-66f3-492c-808d-d01d872ee6b8", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ccf855d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"whisker-6ccf855d9b-zb2xt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali78920951076", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:07.498718 containerd[1901]: 2025-12-12 17:30:07.450 [INFO][4635] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.193/32] ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.498913 containerd[1901]: 2025-12-12 17:30:07.450 [INFO][4635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78920951076 ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.498913 containerd[1901]: 2025-12-12 17:30:07.465 [INFO][4635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.499043 containerd[1901]: 2025-12-12 17:30:07.466 [INFO][4635] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" 
WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0", GenerateName:"whisker-6ccf855d9b-", Namespace:"calico-system", SelfLink:"", UID:"9da8aa8d-66f3-492c-808d-d01d872ee6b8", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 30, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6ccf855d9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d", Pod:"whisker-6ccf855d9b-zb2xt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali78920951076", MAC:"76:73:a9:00:b5:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:07.499485 containerd[1901]: 2025-12-12 17:30:07.492 [INFO][4635] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" Namespace="calico-system" Pod="whisker-6ccf855d9b-zb2xt" WorkloadEndpoint="ip--172--31--24--26-k8s-whisker--6ccf855d9b--zb2xt-eth0" Dec 12 17:30:07.543367 containerd[1901]: time="2025-12-12T17:30:07.543099675Z" level=info msg="connecting to shim f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d" address="unix:///run/containerd/s/555053b6450581b5143c9a895c482e0365e81e2844d03de886c9f5f0ce4ee96d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:30:07.595648 systemd[1]: Started cri-containerd-f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d.scope - libcontainer container f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d. 
Dec 12 17:30:07.794893 containerd[1901]: time="2025-12-12T17:30:07.794839816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6ccf855d9b-zb2xt,Uid:9da8aa8d-66f3-492c-808d-d01d872ee6b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2df146e1cb09c0d923a193a0abc571e809b88db301aa592c961b0ef709e362d\"" Dec 12 17:30:07.801485 containerd[1901]: time="2025-12-12T17:30:07.800973328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:30:08.074443 containerd[1901]: time="2025-12-12T17:30:08.074251970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:30:08.078226 containerd[1901]: time="2025-12-12T17:30:08.078102674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:30:08.078572 containerd[1901]: time="2025-12-12T17:30:08.078153578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:30:08.079020 kubelet[3599]: E1212 17:30:08.078903 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:30:08.079020 kubelet[3599]: E1212 17:30:08.078980 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:30:08.080356 kubelet[3599]: E1212 17:30:08.079982 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:08.082051 containerd[1901]: time="2025-12-12T17:30:08.081693422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:30:08.344116 containerd[1901]: time="2025-12-12T17:30:08.343975863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:30:08.347488 containerd[1901]: time="2025-12-12T17:30:08.347362587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:30:08.348037 containerd[1901]: time="2025-12-12T17:30:08.347678883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:30:08.348129 kubelet[3599]: E1212 17:30:08.348020 3599 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:30:08.348129 kubelet[3599]: E1212 17:30:08.348083 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:30:08.348245 kubelet[3599]: E1212 17:30:08.348185 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:08.348299 kubelet[3599]: E1212 17:30:08.348253 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:30:08.615520 systemd-networkd[1813]: cali78920951076: Gained IPv6LL Dec 12 17:30:08.705068 kubelet[3599]: E1212 17:30:08.703835 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:30:09.187930 (udev-worker)[4554]: Network interface NamePolicy= disabled on kernel command line. 
Dec 12 17:30:09.220729 systemd-networkd[1813]: vxlan.calico: Link UP Dec 12 17:30:09.220749 systemd-networkd[1813]: vxlan.calico: Gained carrier Dec 12 17:30:09.706044 kubelet[3599]: E1212 17:30:09.705954 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:30:10.243689 systemd[1]: Started sshd@7-172.31.24.26:22-147.75.109.163:50292.service - OpenSSH per-connection server daemon (147.75.109.163:50292). Dec 12 17:30:10.254068 containerd[1901]: time="2025-12-12T17:30:10.253931920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkfjd,Uid:f9068c32-dbae-4f8b-8dc5-106c0f06bde7,Namespace:kube-system,Attempt:0,}" Dec 12 17:30:10.462169 sshd[4903]: Accepted publickey for core from 147.75.109.163 port 50292 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:30:10.469062 sshd-session[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:10.492485 systemd-logind[1874]: New session 8 of user core. Dec 12 17:30:10.507712 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 17:30:10.630772 systemd-networkd[1813]: caliaca1e1ebf82: Link UP Dec 12 17:30:10.631894 systemd-networkd[1813]: caliaca1e1ebf82: Gained carrier Dec 12 17:30:10.686131 containerd[1901]: 2025-12-12 17:30:10.419 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0 coredns-66bc5c9577- kube-system f9068c32-dbae-4f8b-8dc5-106c0f06bde7 870 0 2025-12-12 17:29:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-26 coredns-66bc5c9577-wkfjd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaca1e1ebf82 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-" Dec 12 17:30:10.686131 containerd[1901]: 2025-12-12 17:30:10.420 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 17:30:10.686131 containerd[1901]: 2025-12-12 17:30:10.537 [INFO][4920] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" HandleID="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Workload="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.538 [INFO][4920] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" HandleID="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Workload="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331c20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-26", "pod":"coredns-66bc5c9577-wkfjd", "timestamp":"2025-12-12 17:30:10.53755869 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.538 [INFO][4920] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.538 [INFO][4920] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.538 [INFO][4920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26' Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.563 [INFO][4920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" host="ip-172-31-24-26" Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.570 [INFO][4920] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26" Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.578 [INFO][4920] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.581 [INFO][4920] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.585 [INFO][4920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:10.687166 containerd[1901]: 2025-12-12 17:30:10.585 [INFO][4920] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" host="ip-172-31-24-26" Dec 12 17:30:10.687725 containerd[1901]: 2025-12-12 17:30:10.588 [INFO][4920] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9 Dec 12 17:30:10.687725 containerd[1901]: 2025-12-12 17:30:10.596 [INFO][4920] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" host="ip-172-31-24-26" Dec 12 17:30:10.687725 containerd[1901]: 2025-12-12 17:30:10.615 [INFO][4920] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.194/26] block=192.168.1.192/26 handle="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" host="ip-172-31-24-26" Dec 12 17:30:10.687725 containerd[1901]: 2025-12-12 17:30:10.615 [INFO][4920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.194/26] handle="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" host="ip-172-31-24-26" Dec 12 17:30:10.687725 containerd[1901]: 2025-12-12 17:30:10.615 [INFO][4920] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:30:10.687725 containerd[1901]: 2025-12-12 17:30:10.616 [INFO][4920] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.194/26] IPv6=[] ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" HandleID="k8s-pod-network.57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Workload="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 17:30:10.688011 containerd[1901]: 2025-12-12 17:30:10.622 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f9068c32-dbae-4f8b-8dc5-106c0f06bde7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"coredns-66bc5c9577-wkfjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaca1e1ebf82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:10.688011 containerd[1901]: 2025-12-12 17:30:10.622 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.194/32] ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 17:30:10.688011 containerd[1901]: 2025-12-12 17:30:10.623 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaca1e1ebf82 ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 
17:30:10.688011 containerd[1901]: 2025-12-12 17:30:10.633 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 17:30:10.688011 containerd[1901]: 2025-12-12 17:30:10.633 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f9068c32-dbae-4f8b-8dc5-106c0f06bde7", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9", Pod:"coredns-66bc5c9577-wkfjd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaca1e1ebf82", MAC:"06:8b:02:37:d9:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:10.688011 containerd[1901]: 2025-12-12 17:30:10.679 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" Namespace="kube-system" Pod="coredns-66bc5c9577-wkfjd" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--wkfjd-eth0" Dec 12 17:30:10.792436 containerd[1901]: time="2025-12-12T17:30:10.791014831Z" level=info msg="connecting to shim 57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9" address="unix:///run/containerd/s/38421f32e2b6d167fb3f6d5296842d96f646e092cff56ac25ba820b90ca4c954" namespace=k8s.io protocol=ttrpc version=3 Dec 12 
17:30:10.873077 systemd[1]: Started cri-containerd-57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9.scope - libcontainer container 57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9. Dec 12 17:30:10.956004 sshd[4927]: Connection closed by 147.75.109.163 port 50292 Dec 12 17:30:10.956925 sshd-session[4903]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:10.970452 systemd[1]: sshd@7-172.31.24.26:22-147.75.109.163:50292.service: Deactivated successfully. Dec 12 17:30:10.979992 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:30:10.984914 systemd-logind[1874]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:30:10.988696 systemd-logind[1874]: Removed session 8. Dec 12 17:30:11.000743 containerd[1901]: time="2025-12-12T17:30:11.000675964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wkfjd,Uid:f9068c32-dbae-4f8b-8dc5-106c0f06bde7,Namespace:kube-system,Attempt:0,} returns sandbox id \"57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9\"" Dec 12 17:30:11.013447 containerd[1901]: time="2025-12-12T17:30:11.013303864Z" level=info msg="CreateContainer within sandbox \"57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:30:11.040288 containerd[1901]: time="2025-12-12T17:30:11.040213372Z" level=info msg="Container 1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:30:11.060310 containerd[1901]: time="2025-12-12T17:30:11.059926792Z" level=info msg="CreateContainer within sandbox \"57c877cd774a028539b0ac13dca48e3a4094a2eebcd7aa687df6fedfe98dbab9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d\"" Dec 12 17:30:11.061386 containerd[1901]: time="2025-12-12T17:30:11.061155172Z" level=info msg="StartContainer for \"1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d\"" Dec 12 17:30:11.063266 containerd[1901]: time="2025-12-12T17:30:11.063107776Z" level=info msg="connecting to shim 1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d" address="unix:///run/containerd/s/38421f32e2b6d167fb3f6d5296842d96f646e092cff56ac25ba820b90ca4c954" protocol=ttrpc version=3 Dec 12 17:30:11.101665 systemd[1]: Started cri-containerd-1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d.scope - libcontainer container 1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d. 
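
The coredns endpoint dumps above list the workload's named ports in hex (Port:0x35 and friends). Decoded, they are the standard CoreDNS ports; a quick check:

    package main

    import "fmt"

    func main() {
        // Hex port values copied from the WorkloadEndpoint dump above.
        ports := map[string]uint16{
            "dns (UDP)":       0x35,   // 53
            "dns-tcp":         0x35,   // 53
            "metrics":         0x23c1, // 9153
            "liveness-probe":  0x1f90, // 8080
            "readiness-probe": 0x1ff5, // 8181
        }
        for name, p := range ports {
            fmt.Printf("%s -> %d\n", name, p)
        }
    }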
Dec 12 17:30:11.170027 containerd[1901]: time="2025-12-12T17:30:11.169812629Z" level=info msg="StartContainer for \"1f6469471ca617a409ae38e555cddb2ca87a02c2b33ed5fcf0f06376db4a7b6d\" returns successfully" Dec 12 17:30:11.175578 systemd-networkd[1813]: vxlan.calico: Gained IPv6LL Dec 12 17:30:11.255584 containerd[1901]: time="2025-12-12T17:30:11.255536213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c4f9478f-pv7hn,Uid:b78f3469-6603-4b67-beed-705184b4511e,Namespace:calico-system,Attempt:0,}" Dec 12 17:30:11.265040 containerd[1901]: time="2025-12-12T17:30:11.264577625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrkl4,Uid:1eddba4c-1bd5-4118-9720-635877fa49af,Namespace:kube-system,Attempt:0,}" Dec 12 17:30:11.683254 systemd-networkd[1813]: caliabd331616e3: Link UP Dec 12 17:30:11.686143 systemd-networkd[1813]: caliabd331616e3: Gained carrier Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.473 [INFO][5033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0 coredns-66bc5c9577- kube-system 1eddba4c-1bd5-4118-9720-635877fa49af 872 0 2025-12-12 17:29:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-26 coredns-66bc5c9577-nrkl4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliabd331616e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.475 [INFO][5033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.570 [INFO][5057] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" HandleID="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Workload="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.571 [INFO][5057] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" HandleID="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Workload="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000321b90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-26", "pod":"coredns-66bc5c9577-nrkl4", "timestamp":"2025-12-12 17:30:11.570884671 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.572 [INFO][5057] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.572 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.572 [INFO][5057] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26' Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.601 [INFO][5057] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.611 [INFO][5057] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.620 [INFO][5057] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.624 [INFO][5057] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.629 [INFO][5057] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.629 [INFO][5057] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.633 [INFO][5057] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7 Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.644 [INFO][5057] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.657 [INFO][5057] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.195/26] block=192.168.1.192/26 handle="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.657 [INFO][5057] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.195/26] handle="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" host="ip-172-31-24-26" Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.657 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:30:11.736995 containerd[1901]: 2025-12-12 17:30:11.657 [INFO][5057] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.195/26] IPv6=[] ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" HandleID="k8s-pod-network.bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Workload="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 17:30:11.738191 containerd[1901]: 2025-12-12 17:30:11.672 [INFO][5033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1eddba4c-1bd5-4118-9720-635877fa49af", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"coredns-66bc5c9577-nrkl4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabd331616e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:11.738191 containerd[1901]: 2025-12-12 17:30:11.673 [INFO][5033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.195/32] ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 17:30:11.738191 containerd[1901]: 2025-12-12 17:30:11.673 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabd331616e3 ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 
17:30:11.738191 containerd[1901]: 2025-12-12 17:30:11.690 [INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 17:30:11.738191 containerd[1901]: 2025-12-12 17:30:11.692 [INFO][5033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1eddba4c-1bd5-4118-9720-635877fa49af", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7", Pod:"coredns-66bc5c9577-nrkl4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliabd331616e3", MAC:"ce:22:37:6c:6b:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:11.738191 containerd[1901]: 2025-12-12 17:30:11.728 [INFO][5033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" Namespace="kube-system" Pod="coredns-66bc5c9577-nrkl4" WorkloadEndpoint="ip--172--31--24--26-k8s-coredns--66bc5c9577--nrkl4-eth0" Dec 12 17:30:11.809530 containerd[1901]: time="2025-12-12T17:30:11.805582664Z" level=info msg="connecting to shim bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7" address="unix:///run/containerd/s/dc7354a36b7430cd8c820b20778a5758bd3cac17b0e4948a1d864e12f7fc6921" namespace=k8s.io protocol=ttrpc version=3 Dec 12 
17:30:11.835343 kubelet[3599]: I1212 17:30:11.835227 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wkfjd" podStartSLOduration=52.835200356 podStartE2EDuration="52.835200356s" podCreationTimestamp="2025-12-12 17:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:30:11.75471848 +0000 UTC m=+54.914724982" watchObservedRunningTime="2025-12-12 17:30:11.835200356 +0000 UTC m=+54.995206858" Dec 12 17:30:11.918282 systemd[1]: Started cri-containerd-bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7.scope - libcontainer container bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7. Dec 12 17:30:11.929746 systemd-networkd[1813]: cali181fb111952: Link UP Dec 12 17:30:11.930170 systemd-networkd[1813]: cali181fb111952: Gained carrier Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.484 [INFO][5028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0 calico-kube-controllers-65c4f9478f- calico-system b78f3469-6603-4b67-beed-705184b4511e 869 0 2025-12-12 17:29:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65c4f9478f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-26 calico-kube-controllers-65c4f9478f-pv7hn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali181fb111952 [] [] }} ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.484 [INFO][5028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.603 [INFO][5062] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" HandleID="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Workload="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.604 [INFO][5062] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" HandleID="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Workload="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-26", "pod":"calico-kube-controllers-65c4f9478f-pv7hn", "timestamp":"2025-12-12 17:30:11.603876163 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.604 [INFO][5062] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.657 [INFO][5062] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.658 [INFO][5062] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26' Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.716 [INFO][5062] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.742 [INFO][5062] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.756 [INFO][5062] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.773 [INFO][5062] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.790 [INFO][5062] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.790 [INFO][5062] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.817 [INFO][5062] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.851 [INFO][5062] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.882 [INFO][5062] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.196/26] block=192.168.1.192/26 handle="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.883 [INFO][5062] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.196/26] handle="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" host="ip-172-31-24-26" Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.883 [INFO][5062] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
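
A few lines up, the kubelet's pod_startup_latency_tracker reports podStartSLOduration=52.835200356s for coredns-66bc5c9577-wkfjd. That figure is simply observed running time minus pod creation time, with a zero pulling window because no pull finished in this interval (firstStartedPulling and lastFinishedPulling are the zero time). Reproducing the arithmetic from the logged timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry.
        created, _ := time.Parse(time.RFC3339, "2025-12-12T17:29:19Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-12-12T17:30:11.835200356Z")
        fmt.Println(running.Sub(created)) // 52.835200356s, as logged
    }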
Dec 12 17:30:11.971115 containerd[1901]: 2025-12-12 17:30:11.883 [INFO][5062] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.196/26] IPv6=[] ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" HandleID="k8s-pod-network.77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Workload="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:11.973118 containerd[1901]: 2025-12-12 17:30:11.910 [INFO][5028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0", GenerateName:"calico-kube-controllers-65c4f9478f-", Namespace:"calico-system", SelfLink:"", UID:"b78f3469-6603-4b67-beed-705184b4511e", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c4f9478f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"calico-kube-controllers-65c4f9478f-pv7hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali181fb111952", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:11.973118 containerd[1901]: 2025-12-12 17:30:11.912 [INFO][5028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.196/32] ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:11.973118 containerd[1901]: 2025-12-12 17:30:11.912 [INFO][5028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali181fb111952 ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:11.973118 containerd[1901]: 2025-12-12 17:30:11.925 [INFO][5028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:11.973118 containerd[1901]: 
2025-12-12 17:30:11.925 [INFO][5028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0", GenerateName:"calico-kube-controllers-65c4f9478f-", Namespace:"calico-system", SelfLink:"", UID:"b78f3469-6603-4b67-beed-705184b4511e", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c4f9478f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a", Pod:"calico-kube-controllers-65c4f9478f-pv7hn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali181fb111952", MAC:"66:23:bf:49:1d:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:30:11.973118 containerd[1901]: 2025-12-12 17:30:11.962 [INFO][5028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" Namespace="calico-system" Pod="calico-kube-controllers-65c4f9478f-pv7hn" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--kube--controllers--65c4f9478f--pv7hn-eth0" Dec 12 17:30:12.072484 systemd-networkd[1813]: caliaca1e1ebf82: Gained IPv6LL Dec 12 17:30:12.087191 containerd[1901]: time="2025-12-12T17:30:12.087099689Z" level=info msg="connecting to shim 77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a" address="unix:///run/containerd/s/3eef52a51b7874264d8856ec294cc5404696bb0b52a7f7f680d384d29226cabb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:30:12.203799 systemd[1]: Started cri-containerd-77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a.scope - libcontainer container 77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a. 
Dec 12 17:30:12.253675 containerd[1901]: time="2025-12-12T17:30:12.253145610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-g9dtq,Uid:37d905b7-8baa-415e-b08a-01c4aafd5651,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:30:12.313880 containerd[1901]: time="2025-12-12T17:30:12.313802047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nrkl4,Uid:1eddba4c-1bd5-4118-9720-635877fa49af,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7\"" Dec 12 17:30:12.328771 containerd[1901]: time="2025-12-12T17:30:12.328714375Z" level=info msg="CreateContainer within sandbox \"bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:30:12.373668 containerd[1901]: time="2025-12-12T17:30:12.373573987Z" level=info msg="Container 4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:30:12.389966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434802739.mount: Deactivated successfully. Dec 12 17:30:12.405344 containerd[1901]: time="2025-12-12T17:30:12.405219367Z" level=info msg="CreateContainer within sandbox \"bc6cb8af20dd2f9b0fc0eae5c8057017e3e68195e54d238c9490b4c09ff6d5e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0\"" Dec 12 17:30:12.407695 containerd[1901]: time="2025-12-12T17:30:12.407622991Z" level=info msg="StartContainer for \"4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0\"" Dec 12 17:30:12.414370 containerd[1901]: time="2025-12-12T17:30:12.413839843Z" level=info msg="connecting to shim 4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0" address="unix:///run/containerd/s/dc7354a36b7430cd8c820b20778a5758bd3cac17b0e4948a1d864e12f7fc6921" protocol=ttrpc version=3 Dec 12 17:30:12.515954 systemd[1]: Started cri-containerd-4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0.scope - libcontainer container 4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0. 
Dec 12 17:30:12.605814 containerd[1901]: time="2025-12-12T17:30:12.605567084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c4f9478f-pv7hn,Uid:b78f3469-6603-4b67-beed-705184b4511e,Namespace:calico-system,Attempt:0,} returns sandbox id \"77377eb5fd4e959a6c1b0d63dd7e9b784570b90c3698f89ff129b221f84fed0a\""
Dec 12 17:30:12.616584 containerd[1901]: time="2025-12-12T17:30:12.615939728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 17:30:12.720129 containerd[1901]: time="2025-12-12T17:30:12.720056997Z" level=info msg="StartContainer for \"4b8bbb5be2cd4f79617e1516588ed6126c5350c4fddd1826237d11040e9360c0\" returns successfully"
Dec 12 17:30:12.760738 kubelet[3599]: I1212 17:30:12.760635 3599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nrkl4" podStartSLOduration=53.760584165 podStartE2EDuration="53.760584165s" podCreationTimestamp="2025-12-12 17:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:30:12.759155637 +0000 UTC m=+55.919162055" watchObservedRunningTime="2025-12-12 17:30:12.760584165 +0000 UTC m=+55.920590619"
Dec 12 17:30:12.826409 systemd-networkd[1813]: calid51ec5f1ff0: Link UP
Dec 12 17:30:12.835503 systemd-networkd[1813]: calid51ec5f1ff0: Gained carrier
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.541 [INFO][5179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0 calico-apiserver-6bb58fbcd4- calico-apiserver 37d905b7-8baa-415e-b08a-01c4aafd5651 871 0 2025-12-12 17:29:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bb58fbcd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-26 calico-apiserver-6bb58fbcd4-g9dtq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid51ec5f1ff0 [] [] }} ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.542 [INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.702 [INFO][5217] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" HandleID="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Workload="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.702 [INFO][5217] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" HandleID="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Workload="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-26", "pod":"calico-apiserver-6bb58fbcd4-g9dtq", "timestamp":"2025-12-12 17:30:12.702033284 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.702 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.702 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.703 [INFO][5217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26'
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.734 [INFO][5217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.747 [INFO][5217] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.759 [INFO][5217] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.768 [INFO][5217] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.782 [INFO][5217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.782 [INFO][5217] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.785 [INFO][5217] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.794 [INFO][5217] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.810 [INFO][5217] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.197/26] block=192.168.1.192/26 handle="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.810 [INFO][5217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.197/26] handle="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" host="ip-172-31-24-26"
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.811 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 17:30:12.885971 containerd[1901]: 2025-12-12 17:30:12.811 [INFO][5217] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.197/26] IPv6=[] ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" HandleID="k8s-pod-network.c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Workload="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.888993 containerd[1901]: 2025-12-12 17:30:12.817 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0", GenerateName:"calico-apiserver-6bb58fbcd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37d905b7-8baa-415e-b08a-01c4aafd5651", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bb58fbcd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"calico-apiserver-6bb58fbcd4-g9dtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid51ec5f1ff0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:12.888993 containerd[1901]: 2025-12-12 17:30:12.818 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.197/32] ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.888993 containerd[1901]: 2025-12-12 17:30:12.818 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid51ec5f1ff0 ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.888993 containerd[1901]: 2025-12-12 17:30:12.834 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.888993 containerd[1901]: 2025-12-12 17:30:12.836 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0", GenerateName:"calico-apiserver-6bb58fbcd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"37d905b7-8baa-415e-b08a-01c4aafd5651", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bb58fbcd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4", Pod:"calico-apiserver-6bb58fbcd4-g9dtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid51ec5f1ff0", MAC:"c6:fc:d0:3c:15:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:12.888993 containerd[1901]: 2025-12-12 17:30:12.877 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-g9dtq" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--g9dtq-eth0"
Dec 12 17:30:12.940881 containerd[1901]: time="2025-12-12T17:30:12.940448266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:12.944420 containerd[1901]: time="2025-12-12T17:30:12.943291666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 17:30:12.944883 containerd[1901]: time="2025-12-12T17:30:12.943369426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:30:12.945850 kubelet[3599]: E1212 17:30:12.945605 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:30:12.948145 kubelet[3599]: E1212 17:30:12.946044 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:30:12.948145 kubelet[3599]: E1212 17:30:12.946486 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65c4f9478f-pv7hn_calico-system(b78f3469-6603-4b67-beed-705184b4511e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:12.948145 kubelet[3599]: E1212 17:30:12.947079 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e"
Dec 12 17:30:12.960571 containerd[1901]: time="2025-12-12T17:30:12.960495442Z" level=info msg="connecting to shim c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4" address="unix:///run/containerd/s/5c2101aacad46ff414e5914231655b8100cbaa8fb3558a22861d8f0c0952762a" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:30:13.037715 systemd[1]: Started cri-containerd-c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4.scope - libcontainer container c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4.
Dec 12 17:30:13.246363 containerd[1901]: time="2025-12-12T17:30:13.246217327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-g9dtq,Uid:37d905b7-8baa-415e-b08a-01c4aafd5651,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c83d1e0abf29bd2bd7c951edf00cae3a2fb69e2dc4e1f3eb1e3c9b7de5ebe5c4\""
Dec 12 17:30:13.256131 containerd[1901]: time="2025-12-12T17:30:13.254967667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:30:13.258671 containerd[1901]: time="2025-12-12T17:30:13.258062071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-x7fhr,Uid:358ee8cb-07e7-4336-8448-2d22cafc7817,Namespace:calico-apiserver,Attempt:0,}"
Dec 12 17:30:13.264223 containerd[1901]: time="2025-12-12T17:30:13.263521819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljkxc,Uid:dfaeee63-32d9-4902-9d2a-576429123236,Namespace:calico-system,Attempt:0,}"
Dec 12 17:30:13.542256 containerd[1901]: time="2025-12-12T17:30:13.542141277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:13.544814 containerd[1901]: time="2025-12-12T17:30:13.544667193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:30:13.544964 containerd[1901]: time="2025-12-12T17:30:13.544721529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:13.545282 kubelet[3599]: E1212 17:30:13.545230 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:13.545512 kubelet[3599]: E1212 17:30:13.545480 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:13.546545 kubelet[3599]: E1212 17:30:13.546496 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-g9dtq_calico-apiserver(37d905b7-8baa-415e-b08a-01c4aafd5651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:13.546808 kubelet[3599]: E1212 17:30:13.546763 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:30:13.575381 systemd-networkd[1813]: cali1566f1cb8a2: Link UP
Dec 12 17:30:13.575871 systemd-networkd[1813]: cali1566f1cb8a2: Gained carrier
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.413 [INFO][5296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0 calico-apiserver-6bb58fbcd4- calico-apiserver 358ee8cb-07e7-4336-8448-2d22cafc7817 873 0 2025-12-12 17:29:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bb58fbcd4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-26 calico-apiserver-6bb58fbcd4-x7fhr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1566f1cb8a2 [] [] }} ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.414 [INFO][5296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.482 [INFO][5325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" HandleID="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Workload="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.483 [INFO][5325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" HandleID="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Workload="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-26", "pod":"calico-apiserver-6bb58fbcd4-x7fhr", "timestamp":"2025-12-12 17:30:13.482516372 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.483 [INFO][5325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.483 [INFO][5325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.483 [INFO][5325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26'
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.502 [INFO][5325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.510 [INFO][5325] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.518 [INFO][5325] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.522 [INFO][5325] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.527 [INFO][5325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.527 [INFO][5325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.532 [INFO][5325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.539 [INFO][5325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.559 [INFO][5325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.198/26] block=192.168.1.192/26 handle="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.559 [INFO][5325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.198/26] handle="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" host="ip-172-31-24-26"
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.560 [INFO][5325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 17:30:13.601998 containerd[1901]: 2025-12-12 17:30:13.560 [INFO][5325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.198/26] IPv6=[] ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" HandleID="k8s-pod-network.e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Workload="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.605246 containerd[1901]: 2025-12-12 17:30:13.568 [INFO][5296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0", GenerateName:"calico-apiserver-6bb58fbcd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"358ee8cb-07e7-4336-8448-2d22cafc7817", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bb58fbcd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"calico-apiserver-6bb58fbcd4-x7fhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1566f1cb8a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:13.605246 containerd[1901]: 2025-12-12 17:30:13.568 [INFO][5296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.198/32] ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.605246 containerd[1901]: 2025-12-12 17:30:13.568 [INFO][5296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1566f1cb8a2 ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.605246 containerd[1901]: 2025-12-12 17:30:13.572 [INFO][5296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.605246 containerd[1901]: 2025-12-12 17:30:13.573 [INFO][5296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0", GenerateName:"calico-apiserver-6bb58fbcd4-", Namespace:"calico-apiserver", SelfLink:"", UID:"358ee8cb-07e7-4336-8448-2d22cafc7817", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bb58fbcd4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed", Pod:"calico-apiserver-6bb58fbcd4-x7fhr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1566f1cb8a2", MAC:"56:2d:5c:63:b8:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:13.605246 containerd[1901]: 2025-12-12 17:30:13.596 [INFO][5296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" Namespace="calico-apiserver" Pod="calico-apiserver-6bb58fbcd4-x7fhr" WorkloadEndpoint="ip--172--31--24--26-k8s-calico--apiserver--6bb58fbcd4--x7fhr-eth0"
Dec 12 17:30:13.607889 systemd-networkd[1813]: caliabd331616e3: Gained IPv6LL
Dec 12 17:30:13.672161 systemd-networkd[1813]: cali181fb111952: Gained IPv6LL
Dec 12 17:30:13.695251 containerd[1901]: time="2025-12-12T17:30:13.694787685Z" level=info msg="connecting to shim e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed" address="unix:///run/containerd/s/20824472d86ccb3764fb6ab3adff364d02d589ee06b128e44d87e24b83301a9c" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:30:13.748691 kubelet[3599]: E1212 17:30:13.748544 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:30:13.750578 kubelet[3599]: E1212 17:30:13.749075 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e"
Dec 12 17:30:13.764858 systemd-networkd[1813]: cali05b65ca406c: Link UP
Dec 12 17:30:13.779657 systemd-networkd[1813]: cali05b65ca406c: Gained carrier
Dec 12 17:30:13.834309 systemd[1]: Started cri-containerd-e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed.scope - libcontainer container e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed.
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.405 [INFO][5294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0 csi-node-driver- calico-system dfaeee63-32d9-4902-9d2a-576429123236 773 0 2025-12-12 17:29:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-26 csi-node-driver-ljkxc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali05b65ca406c [] [] }} ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.406 [INFO][5294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.488 [INFO][5319] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" HandleID="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Workload="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.490 [INFO][5319] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" HandleID="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Workload="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000320c80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-26", "pod":"csi-node-driver-ljkxc", "timestamp":"2025-12-12 17:30:13.488630996 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.490 [INFO][5319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.560 [INFO][5319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.560 [INFO][5319] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26'
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.603 [INFO][5319] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.623 [INFO][5319] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.642 [INFO][5319] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.655 [INFO][5319] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.667 [INFO][5319] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.668 [INFO][5319] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.675 [INFO][5319] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.695 [INFO][5319] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.725 [INFO][5319] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.199/26] block=192.168.1.192/26 handle="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.725 [INFO][5319] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.199/26] handle="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" host="ip-172-31-24-26"
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.725 [INFO][5319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 12 17:30:13.870342 containerd[1901]: 2025-12-12 17:30:13.726 [INFO][5319] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.199/26] IPv6=[] ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" HandleID="k8s-pod-network.f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Workload="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.871540 containerd[1901]: 2025-12-12 17:30:13.752 [INFO][5294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfaeee63-32d9-4902-9d2a-576429123236", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"csi-node-driver-ljkxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05b65ca406c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:13.871540 containerd[1901]: 2025-12-12 17:30:13.753 [INFO][5294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.199/32] ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.871540 containerd[1901]: 2025-12-12 17:30:13.753 [INFO][5294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05b65ca406c ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.871540 containerd[1901]: 2025-12-12 17:30:13.780 [INFO][5294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.871540 containerd[1901]: 2025-12-12 17:30:13.789 [INFO][5294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dfaeee63-32d9-4902-9d2a-576429123236", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565", Pod:"csi-node-driver-ljkxc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05b65ca406c", MAC:"f2:63:92:51:54:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:13.871540 containerd[1901]: 2025-12-12 17:30:13.857 [INFO][5294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" Namespace="calico-system" Pod="csi-node-driver-ljkxc" WorkloadEndpoint="ip--172--31--24--26-k8s-csi--node--driver--ljkxc-eth0"
Dec 12 17:30:13.947605 containerd[1901]: time="2025-12-12T17:30:13.947516279Z" level=info msg="connecting to shim f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565" address="unix:///run/containerd/s/a8c3a6aa35c961866c314269b2bbc438cb21e4f378bc1a9d9eeda6fc90e59886" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:30:14.005540 systemd[1]: Started cri-containerd-f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565.scope - libcontainer container f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565.
Dec 12 17:30:14.075676 containerd[1901]: time="2025-12-12T17:30:14.075285703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bb58fbcd4-x7fhr,Uid:358ee8cb-07e7-4336-8448-2d22cafc7817,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e6bc0aa36a14505662f2e44c79ee14fa85954bfe9eb5c25e8b8bc1e2a47639ed\""
Dec 12 17:30:14.082335 containerd[1901]: time="2025-12-12T17:30:14.081530059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:30:14.108908 containerd[1901]: time="2025-12-12T17:30:14.108234919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ljkxc,Uid:dfaeee63-32d9-4902-9d2a-576429123236,Namespace:calico-system,Attempt:0,} returns sandbox id \"f4113008acd4b126e08f793df2303fa72fb69f60bd1e822f9f91ea1ed8710565\""
Dec 12 17:30:14.253217 containerd[1901]: time="2025-12-12T17:30:14.253139708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-224vp,Uid:68e53e1a-54da-4cf3-b329-4a29532261fd,Namespace:calico-system,Attempt:0,}"
Dec 12 17:30:14.311519 systemd-networkd[1813]: calid51ec5f1ff0: Gained IPv6LL
Dec 12 17:30:14.363886 containerd[1901]: time="2025-12-12T17:30:14.363412233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:14.366395 containerd[1901]: time="2025-12-12T17:30:14.365698473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:30:14.366395 containerd[1901]: time="2025-12-12T17:30:14.365737053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:14.366615 kubelet[3599]: E1212 17:30:14.366127 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:14.366615 kubelet[3599]: E1212 17:30:14.366193 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:14.367982 kubelet[3599]: E1212 17:30:14.367139 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-x7fhr_calico-apiserver(358ee8cb-07e7-4336-8448-2d22cafc7817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:14.367982 kubelet[3599]: E1212 17:30:14.367207 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817"
Dec 12 17:30:14.369747 containerd[1901]: time="2025-12-12T17:30:14.369664761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 17:30:14.648627 systemd-networkd[1813]: calie077d9c5c50: Link UP
Dec 12 17:30:14.651984 systemd-networkd[1813]: calie077d9c5c50: Gained carrier
Dec 12 17:30:14.662372 containerd[1901]: time="2025-12-12T17:30:14.661640410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:14.666643 containerd[1901]: time="2025-12-12T17:30:14.666478366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 17:30:14.666643 containerd[1901]: time="2025-12-12T17:30:14.666543046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 17:30:14.672953 kubelet[3599]: E1212 17:30:14.669354 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:30:14.672953 kubelet[3599]: E1212 17:30:14.669450 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:30:14.672953 kubelet[3599]: E1212 17:30:14.669572 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:14.673234 containerd[1901]: time="2025-12-12T17:30:14.673038322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.387 [INFO][5449] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0 goldmane-7c778bb748- calico-system 68e53e1a-54da-4cf3-b329-4a29532261fd 874 0 2025-12-12 17:29:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-26 goldmane-7c778bb748-224vp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie077d9c5c50 [] [] }} ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.388 [INFO][5449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.500 [INFO][5461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" HandleID="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Workload="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.501 [INFO][5461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" HandleID="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Workload="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d950), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-26", "pod":"goldmane-7c778bb748-224vp", "timestamp":"2025-12-12 17:30:14.500823993 +0000 UTC"}, Hostname:"ip-172-31-24-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.501 [INFO][5461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.501 [INFO][5461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.501 [INFO][5461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-26'
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.522 [INFO][5461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.543 [INFO][5461] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.556 [INFO][5461] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.564 [INFO][5461] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.571 [INFO][5461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.571 [INFO][5461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.588 [INFO][5461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.597 [INFO][5461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.637 [INFO][5461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.200/26] block=192.168.1.192/26 handle="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.637 [INFO][5461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.200/26] handle="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" host="ip-172-31-24-26"
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.637 [INFO][5461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
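The ImagePullBackOff entries interleaved with this IPAM sequence show the kubelet's backoff at work: after an ErrImagePull it refuses to re-pull until a backoff window expires, and the window grows with each failure. A small sketch of that schedule, assuming the 10s base and 5m cap used by current kubelets; treat those constants as an assumption, not something stated in this log:

```go
package main

import (
	"fmt"
	"time"
)

// backoff approximates the kubelet's image-pull retry delay after the
// given number of consecutive failures: doubling from a base, capped.
// Illustrative reconstruction only; the real kubelet tracks this per
// image via its internal flow-control backoff.
func backoff(failures int) time.Duration {
	d := 10 * time.Second << failures // 10s, 20s, 40s, ...
	if limit := 5 * time.Minute; d > limit {
		d = limit
	}
	return d
}

func main() {
	for i := 0; i < 7; i++ {
		fmt.Printf("failure %d: next pull attempt in %s\n", i+1, backoff(i))
	}
}
```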
Dec 12 17:30:14.698945 containerd[1901]: 2025-12-12 17:30:14.637 [INFO][5461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.200/26] IPv6=[] ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" HandleID="k8s-pod-network.1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Workload="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.701030 containerd[1901]: 2025-12-12 17:30:14.642 [INFO][5449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"68e53e1a-54da-4cf3-b329-4a29532261fd", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"", Pod:"goldmane-7c778bb748-224vp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie077d9c5c50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:14.701030 containerd[1901]: 2025-12-12 17:30:14.642 [INFO][5449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.200/32] ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.701030 containerd[1901]: 2025-12-12 17:30:14.642 [INFO][5449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie077d9c5c50 ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.701030 containerd[1901]: 2025-12-12 17:30:14.651 [INFO][5449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.701030 containerd[1901]: 2025-12-12 17:30:14.656 [INFO][5449] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"68e53e1a-54da-4cf3-b329-4a29532261fd", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 29, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-26", ContainerID:"1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a", Pod:"goldmane-7c778bb748-224vp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie077d9c5c50", MAC:"56:66:00:ef:ac:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 12 17:30:14.701030 containerd[1901]: 2025-12-12 17:30:14.688 [INFO][5449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" Namespace="calico-system" Pod="goldmane-7c778bb748-224vp" WorkloadEndpoint="ip--172--31--24--26-k8s-goldmane--7c778bb748--224vp-eth0"
Dec 12 17:30:14.774091 kubelet[3599]: E1212 17:30:14.773871 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817"
Dec 12 17:30:14.780714 kubelet[3599]: E1212 17:30:14.780591 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:30:14.822176 containerd[1901]: time="2025-12-12T17:30:14.822056207Z" level=info msg="connecting to shim 1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a" address="unix:///run/containerd/s/37c40a9386ba5bdab81d706ca5c5fb8f4f00543c9a7671222ab3dc2021397f8e" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:30:14.927532 systemd[1]: Started cri-containerd-1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a.scope - libcontainer container 1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a.
Dec 12 17:30:14.952549 systemd-networkd[1813]: cali1566f1cb8a2: Gained IPv6LL
Dec 12 17:30:14.971664 containerd[1901]: time="2025-12-12T17:30:14.971515536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:14.975129 containerd[1901]: time="2025-12-12T17:30:14.975037788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 17:30:14.975726 containerd[1901]: time="2025-12-12T17:30:14.975184464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 17:30:14.976618 kubelet[3599]: E1212 17:30:14.976494 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:30:14.976803 kubelet[3599]: E1212 17:30:14.976596 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:30:14.977613 kubelet[3599]: E1212 17:30:14.977475 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:14.979457 kubelet[3599]: E1212 17:30:14.977819 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236"
Dec 12 17:30:15.079617 systemd-networkd[1813]: cali05b65ca406c: Gained IPv6LL
Dec 12 17:30:15.145033 containerd[1901]: time="2025-12-12T17:30:15.144975513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-224vp,Uid:68e53e1a-54da-4cf3-b329-4a29532261fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"1a6c1f8b4c1f4d9bc1c171fc956b91d8dbbeb8e0d964b37e6feda6a9ada3db0a\""
Dec 12 17:30:15.149841 containerd[1901]: time="2025-12-12T17:30:15.149769933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:30:15.444614 containerd[1901]: time="2025-12-12T17:30:15.444472786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:15.447269 containerd[1901]: time="2025-12-12T17:30:15.447076462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:30:15.447269 containerd[1901]: time="2025-12-12T17:30:15.447220906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:15.448348 kubelet[3599]: E1212 17:30:15.447682 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:30:15.448348 kubelet[3599]: E1212 17:30:15.447762 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:30:15.448348 kubelet[3599]: E1212 17:30:15.447874 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-224vp_calico-system(68e53e1a-54da-4cf3-b329-4a29532261fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:15.448348 kubelet[3599]: E1212 17:30:15.447928 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:30:15.780692 kubelet[3599]: E1212 17:30:15.780099 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817"
Dec 12 17:30:15.780692 kubelet[3599]: E1212 17:30:15.780281 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:30:15.782779 kubelet[3599]: E1212 17:30:15.782710 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236"
Dec 12 17:30:15.912543 systemd-networkd[1813]: calie077d9c5c50: Gained IPv6LL
Dec 12 17:30:15.993912 systemd[1]: Started sshd@8-172.31.24.26:22-147.75.109.163:59610.service - OpenSSH per-connection server daemon (147.75.109.163:59610).
Dec 12 17:30:16.186763 sshd[5533]: Accepted publickey for core from 147.75.109.163 port 59610 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:16.192530 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:16.203087 systemd-logind[1874]: New session 9 of user core.
Dec 12 17:30:16.215587 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 12 17:30:16.483243 sshd[5536]: Connection closed by 147.75.109.163 port 59610
Dec 12 17:30:16.484036 sshd-session[5533]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:16.492056 systemd[1]: sshd@8-172.31.24.26:22-147.75.109.163:59610.service: Deactivated successfully.
Dec 12 17:30:16.497631 systemd[1]: session-9.scope: Deactivated successfully.
Dec 12 17:30:16.499768 systemd-logind[1874]: Session 9 logged out. Waiting for processes to exit.
Dec 12 17:30:16.504587 systemd-logind[1874]: Removed session 9.
Dec 12 17:30:16.782604 kubelet[3599]: E1212 17:30:16.782508 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:30:18.280195 ntpd[2031]: Listen normally on 6 vxlan.calico 192.168.1.192:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 6 vxlan.calico 192.168.1.192:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 7 cali78920951076 [fe80::ecee:eeff:feee:eeee%4]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 8 vxlan.calico [fe80::64d7:d0ff:fe13:986c%5]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 9 caliaca1e1ebf82 [fe80::ecee:eeff:feee:eeee%8]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 10 caliabd331616e3 [fe80::ecee:eeff:feee:eeee%9]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 11 cali181fb111952 [fe80::ecee:eeff:feee:eeee%10]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 12 calid51ec5f1ff0 [fe80::ecee:eeff:feee:eeee%11]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 13 cali1566f1cb8a2 [fe80::ecee:eeff:feee:eeee%12]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 14 cali05b65ca406c [fe80::ecee:eeff:feee:eeee%13]:123
Dec 12 17:30:18.281207 ntpd[2031]: 12 Dec 17:30:18 ntpd[2031]: Listen normally on 15 calie077d9c5c50 [fe80::ecee:eeff:feee:eeee%14]:123
Dec 12 17:30:18.280276 ntpd[2031]: Listen normally on 7 cali78920951076 [fe80::ecee:eeff:feee:eeee%4]:123
Dec 12 17:30:18.280362 ntpd[2031]: Listen normally on 8 vxlan.calico [fe80::64d7:d0ff:fe13:986c%5]:123
Dec 12 17:30:18.280414 ntpd[2031]: Listen normally on 9 caliaca1e1ebf82 [fe80::ecee:eeff:feee:eeee%8]:123
Dec 12 17:30:18.280459 ntpd[2031]: Listen normally on 10 caliabd331616e3 [fe80::ecee:eeff:feee:eeee%9]:123
Dec 12 17:30:18.280502 ntpd[2031]: Listen normally on 11 cali181fb111952 [fe80::ecee:eeff:feee:eeee%10]:123
Dec 12 17:30:18.280554 ntpd[2031]: Listen normally on 12 calid51ec5f1ff0 [fe80::ecee:eeff:feee:eeee%11]:123
Dec 12 17:30:18.280598 ntpd[2031]: Listen normally on 13 cali1566f1cb8a2 [fe80::ecee:eeff:feee:eeee%12]:123
Dec 12 17:30:18.280641 ntpd[2031]: Listen normally on 14 cali05b65ca406c [fe80::ecee:eeff:feee:eeee%13]:123
Dec 12 17:30:18.280686 ntpd[2031]: Listen normally on 15 calie077d9c5c50 [fe80::ecee:eeff:feee:eeee%14]:123
Dec 12 17:30:21.523778 systemd[1]: Started sshd@9-172.31.24.26:22-147.75.109.163:59614.service - OpenSSH per-connection server daemon (147.75.109.163:59614).
Dec 12 17:30:21.713849 sshd[5563]: Accepted publickey for core from 147.75.109.163 port 59614 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:21.716747 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:21.724513 systemd-logind[1874]: New session 10 of user core.
Dec 12 17:30:21.730669 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 12 17:30:21.993515 sshd[5566]: Connection closed by 147.75.109.163 port 59614
Dec 12 17:30:21.994744 sshd-session[5563]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:22.001636 systemd-logind[1874]: Session 10 logged out. Waiting for processes to exit.
Dec 12 17:30:22.002117 systemd[1]: sshd@9-172.31.24.26:22-147.75.109.163:59614.service: Deactivated successfully.
Dec 12 17:30:22.006898 systemd[1]: session-10.scope: Deactivated successfully.
Dec 12 17:30:22.029781 systemd-logind[1874]: Removed session 10.
Dec 12 17:30:22.030935 systemd[1]: Started sshd@10-172.31.24.26:22-147.75.109.163:59622.service - OpenSSH per-connection server daemon (147.75.109.163:59622).
Dec 12 17:30:22.224493 sshd[5581]: Accepted publickey for core from 147.75.109.163 port 59622 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:22.227145 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:22.235819 systemd-logind[1874]: New session 11 of user core.
Dec 12 17:30:22.244667 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 12 17:30:22.593613 sshd[5584]: Connection closed by 147.75.109.163 port 59622
Dec 12 17:30:22.594952 sshd-session[5581]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:22.609554 systemd-logind[1874]: Session 11 logged out. Waiting for processes to exit.
Dec 12 17:30:22.612001 systemd[1]: sshd@10-172.31.24.26:22-147.75.109.163:59622.service: Deactivated successfully.
Dec 12 17:30:22.621736 systemd[1]: session-11.scope: Deactivated successfully.
Dec 12 17:30:22.656424 systemd-logind[1874]: Removed session 11.
Dec 12 17:30:22.657110 systemd[1]: Started sshd@11-172.31.24.26:22-147.75.109.163:39054.service - OpenSSH per-connection server daemon (147.75.109.163:39054).
Dec 12 17:30:22.847899 sshd[5594]: Accepted publickey for core from 147.75.109.163 port 39054 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:22.850544 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:22.859922 systemd-logind[1874]: New session 12 of user core.
Dec 12 17:30:22.864638 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 12 17:30:23.158454 sshd[5597]: Connection closed by 147.75.109.163 port 39054
Dec 12 17:30:23.156934 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:23.167608 systemd[1]: sshd@11-172.31.24.26:22-147.75.109.163:39054.service: Deactivated successfully.
Dec 12 17:30:23.173063 systemd[1]: session-12.scope: Deactivated successfully.
Dec 12 17:30:23.176901 systemd-logind[1874]: Session 12 logged out. Waiting for processes to exit.
Dec 12 17:30:23.179669 systemd-logind[1874]: Removed session 12.
Dec 12 17:30:25.254723 containerd[1901]: time="2025-12-12T17:30:25.254644879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 17:30:25.514745 containerd[1901]: time="2025-12-12T17:30:25.514661936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:25.517237 containerd[1901]: time="2025-12-12T17:30:25.517113008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 17:30:25.517237 containerd[1901]: time="2025-12-12T17:30:25.517195388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 17:30:25.517786 kubelet[3599]: E1212 17:30:25.517686 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 17:30:25.519371 kubelet[3599]: E1212 17:30:25.517859 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 17:30:25.519371 kubelet[3599]: E1212 17:30:25.518097 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:25.520121 containerd[1901]: time="2025-12-12T17:30:25.519879800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 17:30:25.790451 containerd[1901]: time="2025-12-12T17:30:25.790200310Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:25.792968 containerd[1901]: time="2025-12-12T17:30:25.792823474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 17:30:25.792968 containerd[1901]: time="2025-12-12T17:30:25.792891430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:30:25.794344 kubelet[3599]: E1212 17:30:25.793383 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:30:25.794344 kubelet[3599]: E1212 17:30:25.793450 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:30:25.794344 kubelet[3599]: E1212 17:30:25.793781 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65c4f9478f-pv7hn_calico-system(b78f3469-6603-4b67-beed-705184b4511e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:25.794344 kubelet[3599]: E1212 17:30:25.793844 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e"
Dec 12 17:30:25.794773 containerd[1901]: time="2025-12-12T17:30:25.794101558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 17:30:26.052478 containerd[1901]: time="2025-12-12T17:30:26.052285315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:26.054900 containerd[1901]: time="2025-12-12T17:30:26.054802711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 17:30:26.055634 containerd[1901]: time="2025-12-12T17:30:26.054870403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:30:26.055725 kubelet[3599]: E1212 17:30:26.055143 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 17:30:26.055725 kubelet[3599]: E1212 17:30:26.055214 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 17:30:26.055725 kubelet[3599]: E1212 17:30:26.055371 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:26.055925 kubelet[3599]: E1212 17:30:26.055461 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8"
Dec 12 17:30:26.248542 containerd[1901]: time="2025-12-12T17:30:26.248486288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:30:26.479798 containerd[1901]: time="2025-12-12T17:30:26.479514069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:26.481865 containerd[1901]: time="2025-12-12T17:30:26.481779441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:30:26.482108 containerd[1901]: time="2025-12-12T17:30:26.481924449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:26.482673 kubelet[3599]: E1212 17:30:26.482386 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:26.482673 kubelet[3599]: E1212 17:30:26.482448 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:26.482673 kubelet[3599]: E1212 17:30:26.482566 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-g9dtq_calico-apiserver(37d905b7-8baa-415e-b08a-01c4aafd5651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:26.482673 kubelet[3599]: E1212 17:30:26.482619 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:30:28.196068 systemd[1]: Started sshd@12-172.31.24.26:22-147.75.109.163:39066.service - OpenSSH per-connection server daemon (147.75.109.163:39066).
Dec 12 17:30:28.251569 containerd[1901]: time="2025-12-12T17:30:28.251508226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:30:28.394794 sshd[5611]: Accepted publickey for core from 147.75.109.163 port 39066 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:28.395411 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:28.410699 systemd-logind[1874]: New session 13 of user core.
Dec 12 17:30:28.426588 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 12 17:30:28.540686 containerd[1901]: time="2025-12-12T17:30:28.540489587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:28.544244 containerd[1901]: time="2025-12-12T17:30:28.544158551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:30:28.544444 containerd[1901]: time="2025-12-12T17:30:28.544295015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:28.544991 kubelet[3599]: E1212 17:30:28.544928 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:28.545530 kubelet[3599]: E1212 17:30:28.545123 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:28.546689 kubelet[3599]: E1212 17:30:28.546552 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-x7fhr_calico-apiserver(358ee8cb-07e7-4336-8448-2d22cafc7817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:28.547603 kubelet[3599]: E1212 17:30:28.547519 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817"
Dec 12 17:30:28.705457 sshd[5614]: Connection closed by 147.75.109.163 port 39066
Dec 12 17:30:28.706767 sshd-session[5611]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:28.719181 systemd[1]: sshd@12-172.31.24.26:22-147.75.109.163:39066.service: Deactivated successfully.
Dec 12 17:30:28.725145 systemd[1]: session-13.scope: Deactivated successfully.
Dec 12 17:30:28.729434 systemd-logind[1874]: Session 13 logged out. Waiting for processes to exit.
Dec 12 17:30:28.734035 systemd-logind[1874]: Removed session 13.
Dec 12 17:30:29.261083 containerd[1901]: time="2025-12-12T17:30:29.261030551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 17:30:29.563221 containerd[1901]: time="2025-12-12T17:30:29.563001396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:29.566026 containerd[1901]: time="2025-12-12T17:30:29.565959672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 17:30:29.566147 containerd[1901]: time="2025-12-12T17:30:29.566079924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 17:30:29.566397 kubelet[3599]: E1212 17:30:29.566307 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:30:29.567999 kubelet[3599]: E1212 17:30:29.566402 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:30:29.567999 kubelet[3599]: E1212 17:30:29.566528 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:29.569470 containerd[1901]: time="2025-12-12T17:30:29.569377716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 17:30:29.850613 containerd[1901]: time="2025-12-12T17:30:29.850461818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:29.854300 containerd[1901]: time="2025-12-12T17:30:29.854141042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 17:30:29.854492 containerd[1901]: time="2025-12-12T17:30:29.854169962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 17:30:29.854863 kubelet[3599]: E1212 17:30:29.854815 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:30:29.855034 kubelet[3599]: E1212 17:30:29.855005 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:30:29.855437 kubelet[3599]: E1212 17:30:29.855361 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:29.855663 kubelet[3599]: E1212 17:30:29.855603 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236"
Dec 12 17:30:32.250346 containerd[1901]: time="2025-12-12T17:30:32.249904106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:30:32.530232 containerd[1901]: time="2025-12-12T17:30:32.530169027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:32.532742 containerd[1901]: time="2025-12-12T17:30:32.532673247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:30:32.532852 containerd[1901]: time="2025-12-12T17:30:32.532792107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:32.533049 kubelet[3599]: E1212 17:30:32.532985 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:30:32.533844 kubelet[3599]: E1212 17:30:32.533056 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:30:32.533844 kubelet[3599]: E1212 17:30:32.533163 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-224vp_calico-system(68e53e1a-54da-4cf3-b329-4a29532261fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:32.533844 kubelet[3599]: E1212 17:30:32.533215 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:30:33.748809 systemd[1]: Started sshd@13-172.31.24.26:22-147.75.109.163:60398.service - OpenSSH per-connection server daemon (147.75.109.163:60398).
Dec 12 17:30:33.934454 sshd[5635]: Accepted publickey for core from 147.75.109.163 port 60398 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:33.937123 sshd-session[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:33.946783 systemd-logind[1874]: New session 14 of user core.
Dec 12 17:30:33.952611 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 17:30:34.221078 sshd[5638]: Connection closed by 147.75.109.163 port 60398
Dec 12 17:30:34.222156 sshd-session[5635]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:34.229937 systemd-logind[1874]: Session 14 logged out. Waiting for processes to exit.
Dec 12 17:30:34.231279 systemd[1]: sshd@13-172.31.24.26:22-147.75.109.163:60398.service: Deactivated successfully.
Dec 12 17:30:34.236892 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 17:30:34.241726 systemd-logind[1874]: Removed session 14.
Dec 12 17:30:38.249556 kubelet[3599]: E1212 17:30:38.249470 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:30:38.252990 kubelet[3599]: E1212 17:30:38.252152 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e"
Dec 12 17:30:39.257058 kubelet[3599]: E1212 17:30:39.255367 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8"
Dec 12 17:30:39.268276 systemd[1]: Started sshd@14-172.31.24.26:22-147.75.109.163:60408.service - OpenSSH per-connection server daemon (147.75.109.163:60408).
Dec 12 17:30:39.473158 sshd[5677]: Accepted publickey for core from 147.75.109.163 port 60408 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:39.475113 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:39.484406 systemd-logind[1874]: New session 15 of user core.
Dec 12 17:30:39.490622 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 17:30:39.756444 sshd[5680]: Connection closed by 147.75.109.163 port 60408
Dec 12 17:30:39.755123 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:39.763731 systemd[1]: sshd@14-172.31.24.26:22-147.75.109.163:60408.service: Deactivated successfully.
Dec 12 17:30:39.768168 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 17:30:39.772151 systemd-logind[1874]: Session 15 logged out. Waiting for processes to exit.
Dec 12 17:30:39.776036 systemd-logind[1874]: Removed session 15.
Dec 12 17:30:40.249100 kubelet[3599]: E1212 17:30:40.248902 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817"
Dec 12 17:30:43.255369 kubelet[3599]: E1212 17:30:43.254560 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236"
Dec 12 17:30:44.798186 systemd[1]: Started sshd@15-172.31.24.26:22-147.75.109.163:49054.service - OpenSSH per-connection server daemon (147.75.109.163:49054).
Dec 12 17:30:44.995118 sshd[5695]: Accepted publickey for core from 147.75.109.163 port 49054 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:44.998961 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:45.009823 systemd-logind[1874]: New session 16 of user core.
Dec 12 17:30:45.019643 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 17:30:45.285654 sshd[5698]: Connection closed by 147.75.109.163 port 49054
Dec 12 17:30:45.286782 sshd-session[5695]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:45.295176 systemd[1]: sshd@15-172.31.24.26:22-147.75.109.163:49054.service: Deactivated successfully.
Dec 12 17:30:45.299839 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 17:30:45.303393 systemd-logind[1874]: Session 16 logged out. Waiting for processes to exit.
Dec 12 17:30:45.305938 systemd-logind[1874]: Removed session 16.
Dec 12 17:30:45.324885 systemd[1]: Started sshd@16-172.31.24.26:22-147.75.109.163:49064.service - OpenSSH per-connection server daemon (147.75.109.163:49064).
Dec 12 17:30:45.513668 sshd[5710]: Accepted publickey for core from 147.75.109.163 port 49064 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:45.516125 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:45.523915 systemd-logind[1874]: New session 17 of user core.
Dec 12 17:30:45.530575 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 17:30:45.972923 sshd[5713]: Connection closed by 147.75.109.163 port 49064
Dec 12 17:30:45.974147 sshd-session[5710]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:45.982359 systemd[1]: sshd@16-172.31.24.26:22-147.75.109.163:49064.service: Deactivated successfully.
Dec 12 17:30:45.987176 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 17:30:45.989481 systemd-logind[1874]: Session 17 logged out. Waiting for processes to exit.
Dec 12 17:30:45.992865 systemd-logind[1874]: Removed session 17.
Dec 12 17:30:46.012804 systemd[1]: Started sshd@17-172.31.24.26:22-147.75.109.163:49072.service - OpenSSH per-connection server daemon (147.75.109.163:49072).
Dec 12 17:30:46.214625 sshd[5723]: Accepted publickey for core from 147.75.109.163 port 49072 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:46.217127 sshd-session[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:46.226414 systemd-logind[1874]: New session 18 of user core.
Dec 12 17:30:46.234581 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 17:30:47.225595 sshd[5726]: Connection closed by 147.75.109.163 port 49072
Dec 12 17:30:47.226162 sshd-session[5723]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:47.238974 systemd[1]: sshd@17-172.31.24.26:22-147.75.109.163:49072.service: Deactivated successfully.
Dec 12 17:30:47.250964 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 17:30:47.260048 systemd-logind[1874]: Session 18 logged out. Waiting for processes to exit.
Dec 12 17:30:47.261546 kubelet[3599]: E1212 17:30:47.260609 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:30:47.283746 systemd[1]: Started sshd@18-172.31.24.26:22-147.75.109.163:49086.service - OpenSSH per-connection server daemon (147.75.109.163:49086).
Dec 12 17:30:47.292029 systemd-logind[1874]: Removed session 18.
Dec 12 17:30:47.516553 sshd[5744]: Accepted publickey for core from 147.75.109.163 port 49086 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:47.518946 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:47.528665 systemd-logind[1874]: New session 19 of user core.
Dec 12 17:30:47.532594 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 17:30:48.162468 sshd[5747]: Connection closed by 147.75.109.163 port 49086
Dec 12 17:30:48.163982 sshd-session[5744]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:48.174225 systemd-logind[1874]: Session 19 logged out. Waiting for processes to exit.
Dec 12 17:30:48.177004 systemd[1]: sshd@18-172.31.24.26:22-147.75.109.163:49086.service: Deactivated successfully.
Dec 12 17:30:48.185616 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 17:30:48.213821 systemd-logind[1874]: Removed session 19.
Dec 12 17:30:48.219785 systemd[1]: Started sshd@19-172.31.24.26:22-147.75.109.163:49088.service - OpenSSH per-connection server daemon (147.75.109.163:49088).
Dec 12 17:30:48.420184 sshd[5757]: Accepted publickey for core from 147.75.109.163 port 49088 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs
Dec 12 17:30:48.423034 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:30:48.430706 systemd-logind[1874]: New session 20 of user core.
Dec 12 17:30:48.443562 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 17:30:48.686086 sshd[5762]: Connection closed by 147.75.109.163 port 49088
Dec 12 17:30:48.686529 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Dec 12 17:30:48.693432 systemd-logind[1874]: Session 20 logged out. Waiting for processes to exit.
Dec 12 17:30:48.695672 systemd[1]: sshd@19-172.31.24.26:22-147.75.109.163:49088.service: Deactivated successfully.
Dec 12 17:30:48.701560 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 17:30:48.706611 systemd-logind[1874]: Removed session 20.
Dec 12 17:30:49.251154 containerd[1901]: time="2025-12-12T17:30:49.251084166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:30:49.575881 containerd[1901]: time="2025-12-12T17:30:49.575812004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:49.578240 containerd[1901]: time="2025-12-12T17:30:49.578132960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:30:49.578600 containerd[1901]: time="2025-12-12T17:30:49.578294432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:30:49.579340 kubelet[3599]: E1212 17:30:49.578768 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:49.579340 kubelet[3599]: E1212 17:30:49.578831 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:30:49.579340 kubelet[3599]: E1212 17:30:49.578951 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-g9dtq_calico-apiserver(37d905b7-8baa-415e-b08a-01c4aafd5651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:49.579340 kubelet[3599]: E1212 17:30:49.579001 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:30:51.253563 containerd[1901]: time="2025-12-12T17:30:51.251303756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 17:30:51.547135 containerd[1901]: time="2025-12-12T17:30:51.547081569Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:51.551531 containerd[1901]: time="2025-12-12T17:30:51.551380137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 17:30:51.551931 containerd[1901]: time="2025-12-12T17:30:51.551441397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 17:30:51.552280 kubelet[3599]: E1212 17:30:51.552209 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 17:30:51.554388 kubelet[3599]: E1212 17:30:51.554092 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 17:30:51.555545 kubelet[3599]: E1212 17:30:51.554383 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:51.555623 containerd[1901]: time="2025-12-12T17:30:51.554819337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 17:30:51.831763 containerd[1901]: time="2025-12-12T17:30:51.831236003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:51.834898 containerd[1901]: time="2025-12-12T17:30:51.834780107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 17:30:51.834898 containerd[1901]: time="2025-12-12T17:30:51.834857567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:30:51.835293 kubelet[3599]: E1212 17:30:51.835201 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:30:51.835403 kubelet[3599]: E1212 17:30:51.835292 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:30:51.836212 kubelet[3599]: E1212 17:30:51.835809 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65c4f9478f-pv7hn_calico-system(b78f3469-6603-4b67-beed-705184b4511e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:30:51.836212 kubelet[3599]: E1212 17:30:51.835875 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e"
Dec 12 17:30:51.836877 containerd[1901]: time="2025-12-12T17:30:51.836134859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 17:30:52.113510 containerd[1901]: time="2025-12-12T17:30:52.112915208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:30:52.115300 containerd[1901]: time="2025-12-12T17:30:52.115199660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 17:30:52.115300 containerd[1901]: time="2025-12-12T17:30:52.115213172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:30:52.115652 kubelet[3599]: E1212 17:30:52.115568 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 17:30:52.115728 kubelet[3599]: E1212 17:30:52.115656 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\":
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:30:52.115804 kubelet[3599]: E1212 17:30:52.115762 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:52.115866 kubelet[3599]: E1212 17:30:52.115829 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:30:53.726091 systemd[1]: Started sshd@20-172.31.24.26:22-147.75.109.163:36610.service - OpenSSH per-connection server daemon (147.75.109.163:36610). Dec 12 17:30:53.922408 sshd[5785]: Accepted publickey for core from 147.75.109.163 port 36610 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:30:53.925449 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:53.933711 systemd-logind[1874]: New session 21 of user core. Dec 12 17:30:53.946569 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 17:30:54.215361 sshd[5788]: Connection closed by 147.75.109.163 port 36610 Dec 12 17:30:54.215585 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:54.223448 systemd-logind[1874]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:30:54.225286 systemd[1]: sshd@20-172.31.24.26:22-147.75.109.163:36610.service: Deactivated successfully. Dec 12 17:30:54.229519 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:30:54.235725 systemd-logind[1874]: Removed session 21. 
Dec 12 17:30:54.251836 containerd[1901]: time="2025-12-12T17:30:54.251765543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:30:54.547308 containerd[1901]: time="2025-12-12T17:30:54.547227120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:30:54.550534 containerd[1901]: time="2025-12-12T17:30:54.550438032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:30:54.550807 containerd[1901]: time="2025-12-12T17:30:54.550449300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:30:54.550899 kubelet[3599]: E1212 17:30:54.550769 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:30:54.550899 kubelet[3599]: E1212 17:30:54.550847 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:30:54.552058 kubelet[3599]: E1212 17:30:54.551074 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:54.552131 containerd[1901]: time="2025-12-12T17:30:54.551697372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:30:54.843359 containerd[1901]: time="2025-12-12T17:30:54.842840678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:30:54.845105 containerd[1901]: time="2025-12-12T17:30:54.845031134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:30:54.845275 containerd[1901]: time="2025-12-12T17:30:54.845155166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:30:54.845519 kubelet[3599]: E1212 17:30:54.845456 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:30:54.845793 kubelet[3599]: E1212 17:30:54.845525 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:30:54.846093 containerd[1901]: time="2025-12-12T17:30:54.845941022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:30:54.846490 kubelet[3599]: E1212 17:30:54.846352 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-x7fhr_calico-apiserver(358ee8cb-07e7-4336-8448-2d22cafc7817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:54.846490 kubelet[3599]: E1212 17:30:54.846429 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817" Dec 12 17:30:55.135025 containerd[1901]: time="2025-12-12T17:30:55.134707295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:30:55.137439 containerd[1901]: time="2025-12-12T17:30:55.137369123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:30:55.138675 containerd[1901]: time="2025-12-12T17:30:55.137435615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:30:55.138818 kubelet[3599]: E1212 17:30:55.137805 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:30:55.138818 kubelet[3599]: E1212 17:30:55.137866 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:30:55.138818 kubelet[3599]: E1212 17:30:55.137964 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:55.139034 kubelet[3599]: E1212 17:30:55.138033 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:30:59.254622 systemd[1]: Started sshd@21-172.31.24.26:22-147.75.109.163:36620.service - OpenSSH per-connection server daemon (147.75.109.163:36620). Dec 12 17:30:59.445616 sshd[5804]: Accepted publickey for core from 147.75.109.163 port 36620 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:30:59.448669 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:30:59.457255 systemd-logind[1874]: New session 22 of user core. Dec 12 17:30:59.463599 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:30:59.714587 sshd[5807]: Connection closed by 147.75.109.163 port 36620 Dec 12 17:30:59.716377 sshd-session[5804]: pam_unix(sshd:session): session closed for user core Dec 12 17:30:59.725696 systemd-logind[1874]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:30:59.726516 systemd[1]: sshd@21-172.31.24.26:22-147.75.109.163:36620.service: Deactivated successfully. Dec 12 17:30:59.732073 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:30:59.737866 systemd-logind[1874]: Removed session 22. 
Dec 12 17:31:00.248010 kubelet[3599]: E1212 17:31:00.247943 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651" Dec 12 17:31:01.255831 containerd[1901]: time="2025-12-12T17:31:01.255758454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:31:01.556794 containerd[1901]: time="2025-12-12T17:31:01.556494211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:31:01.558805 containerd[1901]: time="2025-12-12T17:31:01.558712831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:31:01.558805 containerd[1901]: time="2025-12-12T17:31:01.558781363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:31:01.559332 kubelet[3599]: E1212 17:31:01.559267 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:31:01.559881 kubelet[3599]: E1212 17:31:01.559351 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:31:01.559881 kubelet[3599]: E1212 17:31:01.559495 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-224vp_calico-system(68e53e1a-54da-4cf3-b329-4a29532261fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:31:01.559881 kubelet[3599]: E1212 17:31:01.559549 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd" Dec 12 17:31:04.250889 kubelet[3599]: E1212 17:31:04.250778 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:31:04.755409 systemd[1]: Started sshd@22-172.31.24.26:22-147.75.109.163:53336.service - OpenSSH per-connection server daemon (147.75.109.163:53336). Dec 12 17:31:04.966469 sshd[5818]: Accepted publickey for core from 147.75.109.163 port 53336 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:31:04.970445 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:31:04.986814 systemd-logind[1874]: New session 23 of user core. Dec 12 17:31:04.991899 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 12 17:31:05.249357 kubelet[3599]: E1212 17:31:05.248952 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e" Dec 12 17:31:05.330290 sshd[5821]: Connection closed by 147.75.109.163 port 53336 Dec 12 17:31:05.331251 sshd-session[5818]: pam_unix(sshd:session): session closed for user core Dec 12 17:31:05.340538 systemd[1]: sshd@22-172.31.24.26:22-147.75.109.163:53336.service: Deactivated successfully. Dec 12 17:31:05.348801 systemd[1]: session-23.scope: Deactivated successfully. Dec 12 17:31:05.352715 systemd-logind[1874]: Session 23 logged out. Waiting for processes to exit. Dec 12 17:31:05.356792 systemd-logind[1874]: Removed session 23. 
Dec 12 17:31:07.248881 kubelet[3599]: E1212 17:31:07.248808 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817" Dec 12 17:31:09.253074 kubelet[3599]: E1212 17:31:09.252846 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:31:10.374786 systemd[1]: Started sshd@23-172.31.24.26:22-147.75.109.163:53348.service - OpenSSH per-connection server daemon (147.75.109.163:53348). Dec 12 17:31:10.586504 sshd[5862]: Accepted publickey for core from 147.75.109.163 port 53348 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:31:10.589950 sshd-session[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:31:10.601051 systemd-logind[1874]: New session 24 of user core. Dec 12 17:31:10.610884 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 12 17:31:10.917864 sshd[5865]: Connection closed by 147.75.109.163 port 53348 Dec 12 17:31:10.919751 sshd-session[5862]: pam_unix(sshd:session): session closed for user core Dec 12 17:31:10.926975 systemd[1]: sshd@23-172.31.24.26:22-147.75.109.163:53348.service: Deactivated successfully. Dec 12 17:31:10.933357 systemd[1]: session-24.scope: Deactivated successfully. Dec 12 17:31:10.936967 systemd-logind[1874]: Session 24 logged out. Waiting for processes to exit. Dec 12 17:31:10.942354 systemd-logind[1874]: Removed session 24. 
Dec 12 17:31:11.253985 kubelet[3599]: E1212 17:31:11.253708 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651" Dec 12 17:31:13.258490 kubelet[3599]: E1212 17:31:13.258407 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd" Dec 12 17:31:15.963818 systemd[1]: Started sshd@24-172.31.24.26:22-147.75.109.163:52208.service - OpenSSH per-connection server daemon (147.75.109.163:52208). Dec 12 17:31:16.218449 sshd[5880]: Accepted publickey for core from 147.75.109.163 port 52208 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:31:16.224450 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:31:16.237733 systemd-logind[1874]: New session 25 of user core. Dec 12 17:31:16.246660 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 12 17:31:16.258700 kubelet[3599]: E1212 17:31:16.258611 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:31:16.550971 sshd[5883]: Connection closed by 147.75.109.163 port 52208 Dec 12 17:31:16.551811 sshd-session[5880]: pam_unix(sshd:session): session closed for user core Dec 12 17:31:16.564991 systemd-logind[1874]: Session 25 logged out. Waiting for processes to exit. Dec 12 17:31:16.566880 systemd[1]: sshd@24-172.31.24.26:22-147.75.109.163:52208.service: Deactivated successfully. Dec 12 17:31:16.573362 systemd[1]: session-25.scope: Deactivated successfully. Dec 12 17:31:16.578714 systemd-logind[1874]: Removed session 25. 
Dec 12 17:31:18.248335 kubelet[3599]: E1212 17:31:18.248196 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817" Dec 12 17:31:19.248298 kubelet[3599]: E1212 17:31:19.248220 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e" Dec 12 17:31:21.588891 systemd[1]: Started sshd@25-172.31.24.26:22-147.75.109.163:52220.service - OpenSSH per-connection server daemon (147.75.109.163:52220). Dec 12 17:31:21.784360 sshd[5897]: Accepted publickey for core from 147.75.109.163 port 52220 ssh2: RSA SHA256:hFEBiHUGPZODsqsSKl9oWamzWKoAOgSo70JAQAO5bgs Dec 12 17:31:21.788533 sshd-session[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:31:21.804449 systemd-logind[1874]: New session 26 of user core. Dec 12 17:31:21.810135 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 12 17:31:22.103171 sshd[5900]: Connection closed by 147.75.109.163 port 52220 Dec 12 17:31:22.103691 sshd-session[5897]: pam_unix(sshd:session): session closed for user core Dec 12 17:31:22.110900 systemd[1]: sshd@25-172.31.24.26:22-147.75.109.163:52220.service: Deactivated successfully. Dec 12 17:31:22.118891 systemd[1]: session-26.scope: Deactivated successfully. Dec 12 17:31:22.126251 systemd-logind[1874]: Session 26 logged out. Waiting for processes to exit. Dec 12 17:31:22.129890 systemd-logind[1874]: Removed session 26. 
Dec 12 17:31:24.251200 kubelet[3599]: E1212 17:31:24.251112 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:31:25.248656 kubelet[3599]: E1212 17:31:25.248104 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651" Dec 12 17:31:27.249235 kubelet[3599]: E1212 17:31:27.249090 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd" Dec 12 17:31:28.248701 kubelet[3599]: E1212 17:31:28.248570 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8" Dec 12 17:31:29.248382 kubelet[3599]: E1212 17:31:29.248209 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817" Dec 12 17:31:34.249282 containerd[1901]: time="2025-12-12T17:31:34.248973302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:31:34.536701 containerd[1901]: time="2025-12-12T17:31:34.536586279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:31:34.538959 containerd[1901]: time="2025-12-12T17:31:34.538817415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:31:34.538959 containerd[1901]: time="2025-12-12T17:31:34.538900263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:31:34.539186 kubelet[3599]: E1212 17:31:34.539089 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:31:34.539186 kubelet[3599]: E1212 17:31:34.539150 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:31:34.540666 kubelet[3599]: E1212 17:31:34.539258 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-65c4f9478f-pv7hn_calico-system(b78f3469-6603-4b67-beed-705184b4511e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:31:34.540666 kubelet[3599]: E1212 17:31:34.539344 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e" Dec 12 17:31:35.499931 systemd[1]: cri-containerd-f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4.scope: Deactivated 
successfully. Dec 12 17:31:35.500528 systemd[1]: cri-containerd-f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4.scope: Consumed 7.691s CPU time, 62.3M memory peak. Dec 12 17:31:35.507727 containerd[1901]: time="2025-12-12T17:31:35.507476272Z" level=info msg="received container exit event container_id:\"f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4\" id:\"f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4\" pid:3147 exit_status:1 exited_at:{seconds:1765560695 nanos:505988152}" Dec 12 17:31:35.566790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4-rootfs.mount: Deactivated successfully. Dec 12 17:31:36.101295 kubelet[3599]: I1212 17:31:36.100757 3599 scope.go:117] "RemoveContainer" containerID="f0d8071c444c3be20f6e274245e78f2cb2af574b4f3cc3fe666c2763a5b8aec4" Dec 12 17:31:36.104966 containerd[1901]: time="2025-12-12T17:31:36.104895615Z" level=info msg="CreateContainer within sandbox \"c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 17:31:36.125117 containerd[1901]: time="2025-12-12T17:31:36.123627339Z" level=info msg="Container 067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:31:36.146271 containerd[1901]: time="2025-12-12T17:31:36.146219271Z" level=info msg="CreateContainer within sandbox \"c228c0a691660d1f0c92ffffe7ca1964fd082d8ad2e881373f5ad827c7c08b7f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca\"" Dec 12 17:31:36.147374 containerd[1901]: time="2025-12-12T17:31:36.147291351Z" level=info msg="StartContainer for \"067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca\"" Dec 12 17:31:36.149471 containerd[1901]: time="2025-12-12T17:31:36.149410431Z" level=info msg="connecting to shim 067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca" address="unix:///run/containerd/s/c7b679410c9947bbf179b637f21409936b303b2642d519db5bcb52d863771be2" protocol=ttrpc version=3 Dec 12 17:31:36.191634 systemd[1]: Started cri-containerd-067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca.scope - libcontainer container 067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca. 
Dec 12 17:31:36.251197 containerd[1901]: time="2025-12-12T17:31:36.250948227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:31:36.283618 containerd[1901]: time="2025-12-12T17:31:36.283553944Z" level=info msg="StartContainer for \"067466b01026397d19a4e482c587404473ec724345f7ef6764e7cd9d4e3fe4ca\" returns successfully" Dec 12 17:31:36.503920 containerd[1901]: time="2025-12-12T17:31:36.503764757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:31:36.505971 containerd[1901]: time="2025-12-12T17:31:36.505896869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:31:36.506333 containerd[1901]: time="2025-12-12T17:31:36.506273129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:31:36.506477 kubelet[3599]: E1212 17:31:36.506419 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:31:36.506577 kubelet[3599]: E1212 17:31:36.506487 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:31:36.506670 kubelet[3599]: E1212 17:31:36.506593 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:31:36.508702 containerd[1901]: time="2025-12-12T17:31:36.508652669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:31:36.786450 containerd[1901]: time="2025-12-12T17:31:36.786385506Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:31:36.788787 containerd[1901]: time="2025-12-12T17:31:36.788710530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:31:36.788994 containerd[1901]: time="2025-12-12T17:31:36.788861298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:31:36.789151 kubelet[3599]: E1212 17:31:36.789092 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:31:36.789261 kubelet[3599]: E1212 17:31:36.789160 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:31:36.789434 kubelet[3599]: E1212 17:31:36.789276 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-ljkxc_calico-system(dfaeee63-32d9-4902-9d2a-576429123236): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:31:36.789582 kubelet[3599]: E1212 17:31:36.789377 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236" Dec 12 17:31:37.004626 systemd[1]: cri-containerd-abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c.scope: Deactivated successfully. Dec 12 17:31:37.005190 systemd[1]: cri-containerd-abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c.scope: Consumed 24.169s CPU time, 93.3M memory peak. Dec 12 17:31:37.010309 containerd[1901]: time="2025-12-12T17:31:37.010125363Z" level=info msg="received container exit event container_id:\"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\" id:\"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\" pid:3923 exit_status:1 exited_at:{seconds:1765560697 nanos:9125091}" Dec 12 17:31:37.059779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c-rootfs.mount: Deactivated successfully. 
Dec 12 17:31:37.116665 kubelet[3599]: I1212 17:31:37.116614 3599 scope.go:117] "RemoveContainer" containerID="abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c"
Dec 12 17:31:37.122339 containerd[1901]: time="2025-12-12T17:31:37.121874068Z" level=info msg="CreateContainer within sandbox \"a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 12 17:31:37.141237 containerd[1901]: time="2025-12-12T17:31:37.141172336Z" level=info msg="Container bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:31:37.169670 containerd[1901]: time="2025-12-12T17:31:37.169606096Z" level=info msg="CreateContainer within sandbox \"a7ea4c4de742ef3b7f8281c40d1902097bcb47f527ee8997f4cf6f845d4d1b7c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad\""
Dec 12 17:31:37.170589 containerd[1901]: time="2025-12-12T17:31:37.170536324Z" level=info msg="StartContainer for \"bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad\""
Dec 12 17:31:37.173336 containerd[1901]: time="2025-12-12T17:31:37.173232568Z" level=info msg="connecting to shim bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad" address="unix:///run/containerd/s/c7774c9d8b0ec53cc1153dc325e12f66df23b0dd591d624cbc84f3740613d359" protocol=ttrpc version=3
Dec 12 17:31:37.234638 systemd[1]: Started cri-containerd-bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad.scope - libcontainer container bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad.
Dec 12 17:31:37.256433 containerd[1901]: time="2025-12-12T17:31:37.256369720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:31:37.335957 containerd[1901]: time="2025-12-12T17:31:37.335720117Z" level=info msg="StartContainer for \"bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad\" returns successfully"
Dec 12 17:31:37.537634 containerd[1901]: time="2025-12-12T17:31:37.537500226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:31:37.540658 containerd[1901]: time="2025-12-12T17:31:37.540528678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:31:37.541052 containerd[1901]: time="2025-12-12T17:31:37.540585582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:31:37.541457 kubelet[3599]: E1212 17:31:37.541277 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:31:37.541457 kubelet[3599]: E1212 17:31:37.541421 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:31:37.541899 kubelet[3599]: E1212 17:31:37.541752 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-g9dtq_calico-apiserver(37d905b7-8baa-415e-b08a-01c4aafd5651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:31:37.541899 kubelet[3599]: E1212 17:31:37.541856 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:31:39.249392 kubelet[3599]: E1212 17:31:39.249269 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:31:39.253359 containerd[1901]: time="2025-12-12T17:31:39.252884658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 12 17:31:39.589215 containerd[1901]: time="2025-12-12T17:31:39.589137248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:31:39.591417 containerd[1901]: time="2025-12-12T17:31:39.591348836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 12 17:31:39.591547 containerd[1901]: time="2025-12-12T17:31:39.591458996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 12 17:31:39.591866 kubelet[3599]: E1212 17:31:39.591803 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 17:31:39.591973 kubelet[3599]: E1212 17:31:39.591874 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 12 17:31:39.592038 kubelet[3599]: E1212 17:31:39.591983 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:31:39.593522 containerd[1901]: time="2025-12-12T17:31:39.593407052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 12 17:31:39.881240 containerd[1901]: time="2025-12-12T17:31:39.880971058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:31:39.883343 containerd[1901]: time="2025-12-12T17:31:39.883209106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 12 17:31:39.883343 containerd[1901]: time="2025-12-12T17:31:39.883284346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:31:39.884039 kubelet[3599]: E1212 17:31:39.883713 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 17:31:39.884039 kubelet[3599]: E1212 17:31:39.883775 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 12 17:31:39.884039 kubelet[3599]: E1212 17:31:39.883905 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6ccf855d9b-zb2xt_calico-system(9da8aa8d-66f3-492c-808d-d01d872ee6b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:31:39.884261 kubelet[3599]: E1212 17:31:39.883973 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8"
Dec 12 17:31:40.684273 kubelet[3599]: E1212 17:31:40.684181 3599 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-26?timeout=10s\": context deadline exceeded"
Dec 12 17:31:42.335621 systemd[1]: cri-containerd-b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865.scope: Deactivated successfully.
Dec 12 17:31:42.337882 systemd[1]: cri-containerd-b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865.scope: Consumed 6.009s CPU time, 22.9M memory peak.
Dec 12 17:31:42.343273 containerd[1901]: time="2025-12-12T17:31:42.343179082Z" level=info msg="received container exit event container_id:\"b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865\" id:\"b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865\" pid:3172 exit_status:1 exited_at:{seconds:1765560702 nanos:341887378}"
Dec 12 17:31:42.390342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865-rootfs.mount: Deactivated successfully.
Dec 12 17:31:43.145356 kubelet[3599]: I1212 17:31:43.145279 3599 scope.go:117] "RemoveContainer" containerID="b4eb5f0fffba4f84e0d7e996742ae40176da73d1bda96473e3029d6ba4d7e865"
Dec 12 17:31:43.150385 containerd[1901]: time="2025-12-12T17:31:43.150070750Z" level=info msg="CreateContainer within sandbox \"9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 12 17:31:43.171882 containerd[1901]: time="2025-12-12T17:31:43.171815542Z" level=info msg="Container 12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:31:43.193966 containerd[1901]: time="2025-12-12T17:31:43.193900798Z" level=info msg="CreateContainer within sandbox \"9c3792762efbeb759cdadb6ec8b23a6afd09bc4c2734f21e9454e39b49b2bc46\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb\""
Dec 12 17:31:43.195260 containerd[1901]: time="2025-12-12T17:31:43.195210322Z" level=info msg="StartContainer for \"12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb\""
Dec 12 17:31:43.197530 containerd[1901]: time="2025-12-12T17:31:43.197474698Z" level=info msg="connecting to shim 12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb" address="unix:///run/containerd/s/0cc4e05d3534dcb316cc77cfcd6d510aaa8ce38c4bbe2c9b53c555781b118205" protocol=ttrpc version=3
Dec 12 17:31:43.246942 systemd[1]: Started cri-containerd-12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb.scope - libcontainer container 12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb.
Dec 12 17:31:43.344971 containerd[1901]: time="2025-12-12T17:31:43.344904311Z" level=info msg="StartContainer for \"12feb831bc7bd27ec49412c39ba53eef97d459381395268e8f08ee71a709cdbb\" returns successfully"
Dec 12 17:31:44.248997 containerd[1901]: time="2025-12-12T17:31:44.248939819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:31:44.569471 containerd[1901]: time="2025-12-12T17:31:44.569408233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:31:44.571632 containerd[1901]: time="2025-12-12T17:31:44.571551193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:31:44.571747 containerd[1901]: time="2025-12-12T17:31:44.571691257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:31:44.572117 kubelet[3599]: E1212 17:31:44.572052 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:31:44.573059 kubelet[3599]: E1212 17:31:44.572696 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:31:44.573059 kubelet[3599]: E1212 17:31:44.572852 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6bb58fbcd4-x7fhr_calico-apiserver(358ee8cb-07e7-4336-8448-2d22cafc7817): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:31:44.573059 kubelet[3599]: E1212 17:31:44.572922 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-x7fhr" podUID="358ee8cb-07e7-4336-8448-2d22cafc7817"
Dec 12 17:31:46.248135 kubelet[3599]: E1212 17:31:46.248067 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65c4f9478f-pv7hn" podUID="b78f3469-6603-4b67-beed-705184b4511e"
Dec 12 17:31:48.250646 kubelet[3599]: E1212 17:31:48.250563 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ljkxc" podUID="dfaeee63-32d9-4902-9d2a-576429123236"
Dec 12 17:31:48.859308 systemd[1]: cri-containerd-bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad.scope: Deactivated successfully.
Dec 12 17:31:48.861996 containerd[1901]: time="2025-12-12T17:31:48.861684042Z" level=info msg="received container exit event container_id:\"bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad\" id:\"bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad\" pid:6016 exit_status:1 exited_at:{seconds:1765560708 nanos:860389002}"
Dec 12 17:31:48.906543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad-rootfs.mount: Deactivated successfully.
Dec 12 17:31:49.176522 kubelet[3599]: I1212 17:31:49.176310 3599 scope.go:117] "RemoveContainer" containerID="abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c"
Dec 12 17:31:49.178339 kubelet[3599]: I1212 17:31:49.177932 3599 scope.go:117] "RemoveContainer" containerID="bbb7358529bd6b55365b62c9c9a5854e207bd77b3f23b9c7e99421604b97e7ad"
Dec 12 17:31:49.178489 kubelet[3599]: E1212 17:31:49.178389 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-qsgtb_tigera-operator(e5ecd13c-d5f8-48d6-a8bf-2462b955ef30)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-qsgtb" podUID="e5ecd13c-d5f8-48d6-a8bf-2462b955ef30"
Dec 12 17:31:49.183498 containerd[1901]: time="2025-12-12T17:31:49.183415084Z" level=info msg="RemoveContainer for \"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\""
Dec 12 17:31:49.197844 containerd[1901]: time="2025-12-12T17:31:49.197700196Z" level=info msg="RemoveContainer for \"abea987eb3cfa64bffab30e283ab0416f2db9b231f24b181a5a9734a5117ff4c\" returns successfully"
Dec 12 17:31:50.248366 kubelet[3599]: E1212 17:31:50.248194 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bb58fbcd4-g9dtq" podUID="37d905b7-8baa-415e-b08a-01c4aafd5651"
Dec 12 17:31:50.250303 kubelet[3599]: E1212 17:31:50.250240 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6ccf855d9b-zb2xt" podUID="9da8aa8d-66f3-492c-808d-d01d872ee6b8"
Dec 12 17:31:50.250533 containerd[1901]: time="2025-12-12T17:31:50.250461365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:31:50.544600 containerd[1901]: time="2025-12-12T17:31:50.544445922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:31:50.546587 containerd[1901]: time="2025-12-12T17:31:50.546529938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:31:50.546716 containerd[1901]: time="2025-12-12T17:31:50.546646014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:31:50.546924 kubelet[3599]: E1212 17:31:50.546868 3599 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:31:50.547021 kubelet[3599]: E1212 17:31:50.546937 3599 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:31:50.547127 kubelet[3599]: E1212 17:31:50.547037 3599 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-224vp_calico-system(68e53e1a-54da-4cf3-b329-4a29532261fd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:31:50.547195 kubelet[3599]: E1212 17:31:50.547108 3599 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-224vp" podUID="68e53e1a-54da-4cf3-b329-4a29532261fd"
Dec 12 17:31:50.686414 kubelet[3599]: E1212 17:31:50.685697 3599 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-26?timeout=10s\": context deadline exceeded"