Jan 23 17:58:02.180429 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 17:58:02.180472 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Jan 23 16:10:02 -00 2026
Jan 23 17:58:02.180496 kernel: KASLR disabled due to lack of seed
Jan 23 17:58:02.180512 kernel: efi: EFI v2.7 by EDK II
Jan 23 17:58:02.184265 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78551598
Jan 23 17:58:02.184304 kernel: secureboot: Secure boot disabled
Jan 23 17:58:02.184322 kernel: ACPI: Early table checksum verification disabled
Jan 23 17:58:02.184339 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 17:58:02.184355 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 17:58:02.184371 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 17:58:02.184415 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 17:58:02.184431 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 23 17:58:02.184446 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 17:58:02.184464 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 17:58:02.184481 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 17:58:02.184501 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 17:58:02.184518 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 17:58:02.184560 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 17:58:02.184578 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 17:58:02.184578 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 17:58:02.184595 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 17:58:02.184612 kernel: printk: legacy bootconsole [uart0] enabled
Jan 23 17:58:02.184628 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 17:58:02.184645 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:58:02.184661 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 23 17:58:02.184677 kernel: Zone ranges:
Jan 23 17:58:02.184693 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 17:58:02.184716 kernel: DMA32 empty
Jan 23 17:58:02.184733 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 17:58:02.184749 kernel: Device empty
Jan 23 17:58:02.184765 kernel: Movable zone start for each node
Jan 23 17:58:02.184781 kernel: Early memory node ranges
Jan 23 17:58:02.184797 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 17:58:02.184812 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 17:58:02.184828 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 17:58:02.184844 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 17:58:02.184860 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 17:58:02.184875 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 17:58:02.184891 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 17:58:02.184912 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 17:58:02.184934 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 17:58:02.184951 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 17:58:02.184968 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 23 17:58:02.184985 kernel: psci: probing for conduit method from ACPI.
Jan 23 17:58:02.185006 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 17:58:02.185023 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 17:58:02.185039 kernel: psci: Trusted OS migration not required
Jan 23 17:58:02.185056 kernel: psci: SMC Calling Convention v1.1
Jan 23 17:58:02.185073 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 17:58:02.185089 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 17:58:02.185106 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 17:58:02.185124 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 17:58:02.185140 kernel: Detected PIPT I-cache on CPU0
Jan 23 17:58:02.185157 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 17:58:02.185174 kernel: CPU features: detected: Spectre-v2
Jan 23 17:58:02.185194 kernel: CPU features: detected: Spectre-v3a
Jan 23 17:58:02.185211 kernel: CPU features: detected: Spectre-BHB
Jan 23 17:58:02.185228 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 17:58:02.185244 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 17:58:02.185261 kernel: alternatives: applying boot alternatives
Jan 23 17:58:02.185280 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:58:02.185298 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 17:58:02.185315 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 17:58:02.185332 kernel: Fallback order for Node 0: 0
Jan 23 17:58:02.185349 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 23 17:58:02.185365 kernel: Policy zone: Normal
Jan 23 17:58:02.185386 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 17:58:02.185403 kernel: software IO TLB: area num 2.
Jan 23 17:58:02.185420 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Jan 23 17:58:02.185437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 17:58:02.185454 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 17:58:02.185471 kernel: rcu: RCU event tracing is enabled.
Jan 23 17:58:02.185489 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 17:58:02.185506 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 17:58:02.185523 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 17:58:02.185561 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 17:58:02.185579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 17:58:02.185601 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:58:02.185618 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 17:58:02.185635 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 17:58:02.185651 kernel: GICv3: 96 SPIs implemented
Jan 23 17:58:02.185668 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 17:58:02.185684 kernel: Root IRQ handler: gic_handle_irq
Jan 23 17:58:02.185701 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 17:58:02.185718 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 17:58:02.185735 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 17:58:02.185752 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 17:58:02.185768 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 17:58:02.185786 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 23 17:58:02.185807 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 23 17:58:02.185824 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 17:58:02.185841 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 23 17:58:02.185858 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 17:58:02.185874 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 17:58:02.185891 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 17:58:02.185908 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 17:58:02.185925 kernel: Console: colour dummy device 80x25
Jan 23 17:58:02.185943 kernel: printk: legacy console [tty1] enabled
Jan 23 17:58:02.185960 kernel: ACPI: Core revision 20240827
Jan 23 17:58:02.185977 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 17:58:02.185998 kernel: pid_max: default: 32768 minimum: 301
Jan 23 17:58:02.186015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 17:58:02.186032 kernel: landlock: Up and running.
Jan 23 17:58:02.186049 kernel: SELinux: Initializing.
Jan 23 17:58:02.186066 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:58:02.186083 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 17:58:02.186100 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 17:58:02.186118 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 17:58:02.186135 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 17:58:02.186156 kernel: Remapping and enabling EFI services.
Jan 23 17:58:02.186173 kernel: smp: Bringing up secondary CPUs ...
Jan 23 17:58:02.186190 kernel: Detected PIPT I-cache on CPU1
Jan 23 17:58:02.186207 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 17:58:02.186224 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 23 17:58:02.186242 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 17:58:02.186259 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 17:58:02.186276 kernel: SMP: Total of 2 processors activated.
Jan 23 17:58:02.186293 kernel: CPU: All CPU(s) started at EL1
Jan 23 17:58:02.186322 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 17:58:02.186341 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 17:58:02.186362 kernel: CPU features: detected: CRC32 instructions
Jan 23 17:58:02.186380 kernel: alternatives: applying system-wide alternatives
Jan 23 17:58:02.186398 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 23 17:58:02.186417 kernel: devtmpfs: initialized
Jan 23 17:58:02.186435 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 17:58:02.186457 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 17:58:02.186475 kernel: 16880 pages in range for non-PLT usage
Jan 23 17:58:02.186492 kernel: 508400 pages in range for PLT usage
Jan 23 17:58:02.186510 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 17:58:02.186554 kernel: SMBIOS 3.0.0 present.
Jan 23 17:58:02.186577 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 17:58:02.186596 kernel: DMI: Memory slots populated: 0/0
Jan 23 17:58:02.186614 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 17:58:02.186632 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 17:58:02.186656 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 17:58:02.186675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 17:58:02.186693 kernel: audit: initializing netlink subsys (disabled)
Jan 23 17:58:02.186711 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jan 23 17:58:02.186728 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 17:58:02.186746 kernel: cpuidle: using governor menu
Jan 23 17:58:02.186764 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 17:58:02.186799 kernel: ASID allocator initialised with 65536 entries
Jan 23 17:58:02.186819 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 17:58:02.186842 kernel: Serial: AMBA PL011 UART driver
Jan 23 17:58:02.186861 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 17:58:02.186879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 17:58:02.186897 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 17:58:02.186915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 17:58:02.186933 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 17:58:02.186951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 17:58:02.186969 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 17:58:02.186986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 17:58:02.187008 kernel: ACPI: Added _OSI(Module Device)
Jan 23 17:58:02.187026 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 17:58:02.187044 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 17:58:02.187063 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 17:58:02.187081 kernel: ACPI: Interpreter enabled
Jan 23 17:58:02.187100 kernel: ACPI: Using GIC for interrupt routing
Jan 23 17:58:02.187118 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 17:58:02.187136 kernel: ACPI: CPU0 has been hot-added
Jan 23 17:58:02.187154 kernel: ACPI: CPU1 has been hot-added
Jan 23 17:58:02.187177 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 17:58:02.187482 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 17:58:02.193208 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 17:58:02.193754 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 17:58:02.193944 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 17:58:02.194126 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 17:58:02.194151 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 17:58:02.194182 kernel: acpiphp: Slot [1] registered
Jan 23 17:58:02.194201 kernel: acpiphp: Slot [2] registered
Jan 23 17:58:02.194220 kernel: acpiphp: Slot [3] registered
Jan 23 17:58:02.194238 kernel: acpiphp: Slot [4] registered
Jan 23 17:58:02.194256 kernel: acpiphp: Slot [5] registered
Jan 23 17:58:02.194274 kernel: acpiphp: Slot [6] registered
Jan 23 17:58:02.194292 kernel: acpiphp: Slot [7] registered
Jan 23 17:58:02.194310 kernel: acpiphp: Slot [8] registered
Jan 23 17:58:02.194328 kernel: acpiphp: Slot [9] registered
Jan 23 17:58:02.194346 kernel: acpiphp: Slot [10] registered
Jan 23 17:58:02.194368 kernel: acpiphp: Slot [11] registered
Jan 23 17:58:02.194386 kernel: acpiphp: Slot [12] registered
Jan 23 17:58:02.194403 kernel: acpiphp: Slot [13] registered
Jan 23 17:58:02.194422 kernel: acpiphp: Slot [14] registered
Jan 23 17:58:02.194440 kernel: acpiphp: Slot [15] registered
Jan 23 17:58:02.194458 kernel: acpiphp: Slot [16] registered
Jan 23 17:58:02.194476 kernel: acpiphp: Slot [17] registered
Jan 23 17:58:02.194494 kernel: acpiphp: Slot [18] registered
Jan 23 17:58:02.194512 kernel: acpiphp: Slot [19] registered
Jan 23 17:58:02.196574 kernel: acpiphp: Slot [20] registered
Jan 23 17:58:02.196601 kernel: acpiphp: Slot [21] registered
Jan 23 17:58:02.196619 kernel: acpiphp: Slot [22] registered
Jan 23 17:58:02.196638 kernel: acpiphp: Slot [23] registered
Jan 23 17:58:02.196656 kernel: acpiphp: Slot [24] registered
Jan 23 17:58:02.196674 kernel: acpiphp: Slot [25] registered
Jan 23 17:58:02.196692 kernel: acpiphp: Slot [26] registered
Jan 23 17:58:02.196710 kernel: acpiphp: Slot [27] registered
Jan 23 17:58:02.196728 kernel: acpiphp: Slot [28] registered
Jan 23 17:58:02.196745 kernel: acpiphp: Slot [29] registered
Jan 23 17:58:02.196772 kernel: acpiphp: Slot [30] registered
Jan 23 17:58:02.196791 kernel: acpiphp: Slot [31] registered
Jan 23 17:58:02.196809 kernel: PCI host bridge to bus 0000:00
Jan 23 17:58:02.197049 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 17:58:02.197225 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 17:58:02.197393 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:58:02.197592 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 17:58:02.197826 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 23 17:58:02.198039 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 23 17:58:02.198232 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 23 17:58:02.198442 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 23 17:58:02.200190 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 23 17:58:02.202648 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:58:02.202918 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 23 17:58:02.203111 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 23 17:58:02.203300 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 23 17:58:02.203487 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 23 17:58:02.203703 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 17:58:02.203881 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 17:58:02.204048 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 17:58:02.204221 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 17:58:02.204246 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 17:58:02.204265 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 17:58:02.204283 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 17:58:02.204301 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 17:58:02.204320 kernel: iommu: Default domain type: Translated
Jan 23 17:58:02.204338 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 17:58:02.204356 kernel: efivars: Registered efivars operations
Jan 23 17:58:02.204373 kernel: vgaarb: loaded
Jan 23 17:58:02.204396 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 17:58:02.204414 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 17:58:02.204432 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 17:58:02.204450 kernel: pnp: PnP ACPI init
Jan 23 17:58:02.204839 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 17:58:02.204868 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 17:58:02.204887 kernel: NET: Registered PF_INET protocol family
Jan 23 17:58:02.204905 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 17:58:02.204929 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 17:58:02.204947 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 17:58:02.204966 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 17:58:02.204984 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 17:58:02.205002 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 17:58:02.205020 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:58:02.205038 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 17:58:02.205056 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 17:58:02.205075 kernel: PCI: CLS 0 bytes, default 64
Jan 23 17:58:02.205096 kernel: kvm [1]: HYP mode not available
Jan 23 17:58:02.205115 kernel: Initialise system trusted keyrings
Jan 23 17:58:02.205132 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 17:58:02.205150 kernel: Key type asymmetric registered
Jan 23 17:58:02.205168 kernel: Asymmetric key parser 'x509' registered
Jan 23 17:58:02.205186 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 17:58:02.205204 kernel: io scheduler mq-deadline registered
Jan 23 17:58:02.205222 kernel: io scheduler kyber registered
Jan 23 17:58:02.205239 kernel: io scheduler bfq registered
Jan 23 17:58:02.205453 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 17:58:02.205479 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 17:58:02.205498 kernel: ACPI: button: Power Button [PWRB]
Jan 23 17:58:02.205516 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 17:58:02.205553 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 17:58:02.205574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 17:58:02.205593 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 17:58:02.205790 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 17:58:02.205820 kernel: printk: legacy console [ttyS0] disabled
Jan 23 17:58:02.205839 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 17:58:02.205857 kernel: printk: legacy console [ttyS0] enabled
Jan 23 17:58:02.205874 kernel: printk: legacy bootconsole [uart0] disabled
Jan 23 17:58:02.205892 kernel: thunder_xcv, ver 1.0
Jan 23 17:58:02.205910 kernel: thunder_bgx, ver 1.0
Jan 23 17:58:02.205928 kernel: nicpf, ver 1.0
Jan 23 17:58:02.205945 kernel: nicvf, ver 1.0
Jan 23 17:58:02.206135 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 17:58:02.206316 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T17:58:01 UTC (1769191081)
Jan 23 17:58:02.206341 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 17:58:02.206360 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 23 17:58:02.206378 kernel: NET: Registered PF_INET6 protocol family
Jan 23 17:58:02.206395 kernel: watchdog: NMI not fully supported
Jan 23 17:58:02.206413 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 17:58:02.206431 kernel: Segment Routing with IPv6
Jan 23 17:58:02.206449 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 17:58:02.206467 kernel: NET: Registered PF_PACKET protocol family
Jan 23 17:58:02.206489 kernel: Key type dns_resolver registered
Jan 23 17:58:02.206507 kernel: registered taskstats version 1
Jan 23 17:58:02.206525 kernel: Loading compiled-in X.509 certificates
Jan 23 17:58:02.206594 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 3b281aa2bfe49764dd224485ec54e6070c82b8fb'
Jan 23 17:58:02.206613 kernel: Demotion targets for Node 0: null
Jan 23 17:58:02.206632 kernel: Key type .fscrypt registered
Jan 23 17:58:02.206649 kernel: Key type fscrypt-provisioning registered
Jan 23 17:58:02.206667 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 17:58:02.206685 kernel: ima: Allocated hash algorithm: sha1
Jan 23 17:58:02.206708 kernel: ima: No architecture policies found
Jan 23 17:58:02.206726 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 17:58:02.206744 kernel: clk: Disabling unused clocks
Jan 23 17:58:02.206762 kernel: PM: genpd: Disabling unused power domains
Jan 23 17:58:02.206833 kernel: Warning: unable to open an initial console.
Jan 23 17:58:02.206856 kernel: Freeing unused kernel memory: 39552K
Jan 23 17:58:02.206874 kernel: Run /init as init process
Jan 23 17:58:02.206892 kernel: with arguments:
Jan 23 17:58:02.206910 kernel: /init
Jan 23 17:58:02.206958 kernel: with environment:
Jan 23 17:58:02.206978 kernel: HOME=/
Jan 23 17:58:02.206996 kernel: TERM=linux
Jan 23 17:58:02.207016 systemd[1]: Successfully made /usr/ read-only.
Jan 23 17:58:02.207040 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 17:58:02.207061 systemd[1]: Detected virtualization amazon.
Jan 23 17:58:02.207081 systemd[1]: Detected architecture arm64.
Jan 23 17:58:02.207104 systemd[1]: Running in initrd.
Jan 23 17:58:02.207124 systemd[1]: No hostname configured, using default hostname.
Jan 23 17:58:02.207144 systemd[1]: Hostname set to .
Jan 23 17:58:02.207164 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 17:58:02.207183 systemd[1]: Queued start job for default target initrd.target.
Jan 23 17:58:02.207203 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:58:02.207223 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 17:58:02.207243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 17:58:02.207267 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 17:58:02.207288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 17:58:02.207309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 17:58:02.207330 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 17:58:02.207351 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 17:58:02.207372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:58:02.207392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:58:02.207415 systemd[1]: Reached target paths.target - Path Units.
Jan 23 17:58:02.207435 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 17:58:02.207454 systemd[1]: Reached target swap.target - Swaps.
Jan 23 17:58:02.207474 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 17:58:02.207494 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 17:58:02.207514 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 17:58:02.207561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 17:58:02.207585 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 17:58:02.207605 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 17:58:02.207632 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 17:58:02.207653 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 17:58:02.207673 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 17:58:02.207694 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 17:58:02.207715 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 17:58:02.207736 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 17:58:02.207757 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 17:58:02.207777 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 17:58:02.207802 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 17:58:02.207822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 17:58:02.207842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:58:02.207862 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 17:58:02.207883 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:58:02.207907 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 17:58:02.207927 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 17:58:02.207948 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:58:02.208020 systemd-journald[259]: Collecting audit messages is disabled.
Jan 23 17:58:02.208070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 17:58:02.208092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 17:58:02.208112 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 17:58:02.208132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 17:58:02.208152 kernel: Bridge firewalling registered
Jan 23 17:58:02.208170 systemd-journald[259]: Journal started
Jan 23 17:58:02.208211 systemd-journald[259]: Runtime Journal (/run/log/journal/ec26825426ad7da948b7ee4d752cbc2d) is 8M, max 75.3M, 67.3M free.
Jan 23 17:58:02.212405 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 17:58:02.144427 systemd-modules-load[260]: Inserted module 'overlay'
Jan 23 17:58:02.201825 systemd-modules-load[260]: Inserted module 'br_netfilter'
Jan 23 17:58:02.226826 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 17:58:02.235026 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 17:58:02.237838 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 17:58:02.254616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 17:58:02.276940 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 17:58:02.287234 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 17:58:02.302472 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 17:58:02.309601 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 17:58:02.319214 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 17:58:02.328760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 17:58:02.349301 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5fc6d8e43735a6d26d13c2f5b234025ac82c601a45144671feeb457ddade8f9d
Jan 23 17:58:02.431617 systemd-resolved[302]: Positive Trust Anchors:
Jan 23 17:58:02.431644 systemd-resolved[302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 17:58:02.431702 systemd-resolved[302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 17:58:02.527577 kernel: SCSI subsystem initialized
Jan 23 17:58:02.535565 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 17:58:02.549736 kernel: iscsi: registered transport (tcp)
Jan 23 17:58:02.572084 kernel: iscsi: registered transport (qla4xxx)
Jan 23 17:58:02.572159 kernel: QLogic iSCSI HBA Driver
Jan 23 17:58:02.612778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 17:58:02.641390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 17:58:02.645473 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 17:58:02.711588 kernel: random: crng init done
Jan 23 17:58:02.712071 systemd-resolved[302]: Defaulting to hostname 'linux'.
Jan 23 17:58:02.719918 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 17:58:02.725664 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:58:02.752598 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 17:58:02.757785 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 17:58:02.844593 kernel: raid6: neonx8 gen() 6476 MB/s
Jan 23 17:58:02.861577 kernel: raid6: neonx4 gen() 6489 MB/s
Jan 23 17:58:02.878587 kernel: raid6: neonx2 gen() 5390 MB/s
Jan 23 17:58:02.895586 kernel: raid6: neonx1 gen() 3928 MB/s
Jan 23 17:58:02.912594 kernel: raid6: int64x8 gen() 3604 MB/s
Jan 23 17:58:02.929594 kernel: raid6: int64x4 gen() 3687 MB/s
Jan 23 17:58:02.946587 kernel: raid6: int64x2 gen() 3547 MB/s
Jan 23 17:58:02.964680 kernel: raid6: int64x1 gen() 2743 MB/s
Jan 23 17:58:02.964755 kernel: raid6: using algorithm neonx4 gen() 6489 MB/s
Jan 23 17:58:02.983613 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Jan 23 17:58:02.983692 kernel: raid6: using neon recovery algorithm
Jan 23 17:58:02.991583 kernel: xor: measuring software checksum speed
Jan 23 17:58:02.993942 kernel: 8regs : 11543 MB/sec
Jan 23 17:58:02.994004 kernel: 32regs : 13057 MB/sec
Jan 23 17:58:02.995331 kernel: arm64_neon : 9027 MB/sec
Jan 23 17:58:02.995388 kernel: xor: using function: 32regs (13057 MB/sec)
Jan 23 17:58:03.090576 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 17:58:03.102630 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 17:58:03.112174 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 17:58:03.165338 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Jan 23 17:58:03.176045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 17:58:03.193010 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 17:58:03.233606 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation
Jan 23 17:58:03.280635 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 17:58:03.286665 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 17:58:03.441775 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:58:03.448976 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 17:58:03.644850 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 17:58:03.644939 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 17:58:03.651307 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 17:58:03.651674 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 17:58:03.652664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 17:58:03.654559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:58:03.667106 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:58:03.684803 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 17:58:03.684840 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 17:58:03.686306 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:3c:c0:eb:c6:c1
Jan 23 17:58:03.680169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 17:58:03.703192 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 17:58:03.700079 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 17:58:03.711249 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 17:58:03.711320 kernel: GPT:9289727 != 33554431
Jan 23 17:58:03.711345 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 17:58:03.714596 kernel: GPT:9289727 != 33554431
Jan 23 17:58:03.714686 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 17:58:03.716703 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:58:03.720920 (udev-worker)[563]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 17:58:03.749601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 17:58:03.772578 kernel: nvme nvme0: using unchecked data buffer
Jan 23 17:58:03.910403 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 17:58:03.970805 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 17:58:04.000060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 17:58:04.010654 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 17:58:04.051382 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 17:58:04.055353 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 17:58:04.061089 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 17:58:04.064625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:58:04.074834 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 17:58:04.081571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 17:58:04.092246 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 17:58:04.124026 disk-uuid[686]: Primary Header is updated.
Jan 23 17:58:04.124026 disk-uuid[686]: Secondary Entries is updated.
Jan 23 17:58:04.124026 disk-uuid[686]: Secondary Header is updated.
Jan 23 17:58:04.141613 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 17:58:04.149636 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:58:04.158576 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:58:05.161046 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 17:58:05.166277 disk-uuid[692]: The operation has completed successfully.
Jan 23 17:58:05.361881 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 17:58:05.364803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 17:58:05.451467 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 17:58:05.489254 sh[953]: Success
Jan 23 17:58:05.519387 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 17:58:05.519467 kernel: device-mapper: uevent: version 1.0.3
Jan 23 17:58:05.520282 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 17:58:05.534570 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 23 17:58:05.629000 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 17:58:05.631061 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 17:58:05.658696 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 17:58:05.678584 kernel: BTRFS: device fsid 8784b097-3924-47e8-98b3-06e8cbe78a64 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (976)
Jan 23 17:58:05.683129 kernel: BTRFS info (device dm-0): first mount of filesystem 8784b097-3924-47e8-98b3-06e8cbe78a64
Jan 23 17:58:05.683208 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:58:05.841904 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 17:58:05.841995 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 17:58:05.842022 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 17:58:05.871055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 17:58:05.880714 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 17:58:05.885719 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 17:58:05.886915 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 17:58:05.900306 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 17:58:05.957583 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1011)
Jan 23 17:58:05.962366 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:58:05.962443 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:58:05.983237 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:58:05.983309 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:58:05.992652 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:58:05.995338 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 17:58:06.001280 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 17:58:06.096869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 17:58:06.114868 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 17:58:06.186509 systemd-networkd[1152]: lo: Link UP
Jan 23 17:58:06.186558 systemd-networkd[1152]: lo: Gained carrier
Jan 23 17:58:06.192065 systemd-networkd[1152]: Enumeration completed
Jan 23 17:58:06.194428 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 17:58:06.194436 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:58:06.194444 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 17:58:06.208170 systemd[1]: Reached target network.target - Network.
Jan 23 17:58:06.209553 systemd-networkd[1152]: eth0: Link UP
Jan 23 17:58:06.209561 systemd-networkd[1152]: eth0: Gained carrier
Jan 23 17:58:06.209583 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 17:58:06.243630 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.24.204/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 17:58:06.536668 ignition[1074]: Ignition 2.22.0
Jan 23 17:58:06.536690 ignition[1074]: Stage: fetch-offline
Jan 23 17:58:06.537513 ignition[1074]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:06.538063 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:06.538431 ignition[1074]: Ignition finished successfully
Jan 23 17:58:06.549645 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:58:06.556964 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 17:58:06.602406 ignition[1164]: Ignition 2.22.0
Jan 23 17:58:06.602964 ignition[1164]: Stage: fetch
Jan 23 17:58:06.604950 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:06.604977 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:06.605159 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:06.620289 ignition[1164]: PUT result: OK
Jan 23 17:58:06.623911 ignition[1164]: parsed url from cmdline: ""
Jan 23 17:58:06.623926 ignition[1164]: no config URL provided
Jan 23 17:58:06.623941 ignition[1164]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 17:58:06.623965 ignition[1164]: no config at "/usr/lib/ignition/user.ign"
Jan 23 17:58:06.623997 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:06.627315 ignition[1164]: PUT result: OK
Jan 23 17:58:06.627412 ignition[1164]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 17:58:06.637340 ignition[1164]: GET result: OK
Jan 23 17:58:06.637513 ignition[1164]: parsing config with SHA512: 49171d4d5b91479e09ea26ab1fca541186163c62329473df818bd801072a676951cb8a753616a33a5b7a8a66f6b5cc252026d4375a3bc78c8506e1566e10f01f
Jan 23 17:58:06.648663 unknown[1164]: fetched base config from "system"
Jan 23 17:58:06.648695 unknown[1164]: fetched base config from "system"
Jan 23 17:58:06.650244 ignition[1164]: fetch: fetch complete
Jan 23 17:58:06.648718 unknown[1164]: fetched user config from "aws"
Jan 23 17:58:06.650265 ignition[1164]: fetch: fetch passed
Jan 23 17:58:06.650376 ignition[1164]: Ignition finished successfully
Jan 23 17:58:06.664068 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 17:58:06.669762 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 17:58:06.722051 ignition[1170]: Ignition 2.22.0
Jan 23 17:58:06.722080 ignition[1170]: Stage: kargs
Jan 23 17:58:06.723253 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:06.723286 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:06.723446 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:06.725666 ignition[1170]: PUT result: OK
Jan 23 17:58:06.737997 ignition[1170]: kargs: kargs passed
Jan 23 17:58:06.738103 ignition[1170]: Ignition finished successfully
Jan 23 17:58:06.744947 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 17:58:06.751039 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 17:58:06.800176 ignition[1176]: Ignition 2.22.0
Jan 23 17:58:06.800600 ignition[1176]: Stage: disks
Jan 23 17:58:06.801163 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:06.801187 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:06.801335 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:06.812051 ignition[1176]: PUT result: OK
Jan 23 17:58:06.817809 ignition[1176]: disks: disks passed
Jan 23 17:58:06.817929 ignition[1176]: Ignition finished successfully
Jan 23 17:58:06.824038 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 17:58:06.829481 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 17:58:06.832452 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 17:58:06.837660 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 17:58:06.838214 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 17:58:06.838990 systemd[1]: Reached target basic.target - Basic System.
Jan 23 17:58:06.840856 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 17:58:06.905649 systemd-fsck[1185]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 17:58:06.911312 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 17:58:06.918963 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 17:58:07.054595 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5f1f19a2-81b4-48e9-bfdb-d3843ff70e8e r/w with ordered data mode. Quota mode: none.
Jan 23 17:58:07.055990 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 17:58:07.060859 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 17:58:07.067483 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:58:07.073836 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 17:58:07.079230 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 17:58:07.083745 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 17:58:07.086013 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:58:07.107706 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 17:58:07.114690 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 17:58:07.131088 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1204)
Jan 23 17:58:07.135317 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:58:07.135363 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:58:07.142784 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:58:07.142854 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:58:07.146142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:58:07.393731 systemd-networkd[1152]: eth0: Gained IPv6LL
Jan 23 17:58:07.472981 initrd-setup-root[1228]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 17:58:07.493226 initrd-setup-root[1235]: cut: /sysroot/etc/group: No such file or directory
Jan 23 17:58:07.512577 initrd-setup-root[1242]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 17:58:07.532184 initrd-setup-root[1249]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 17:58:07.868155 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 17:58:07.873924 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 17:58:07.882484 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 17:58:07.911790 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 17:58:07.915371 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:58:07.946721 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 17:58:07.970143 ignition[1317]: INFO : Ignition 2.22.0
Jan 23 17:58:07.972734 ignition[1317]: INFO : Stage: mount
Jan 23 17:58:07.974645 ignition[1317]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:07.974645 ignition[1317]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:07.974645 ignition[1317]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:07.984790 ignition[1317]: INFO : PUT result: OK
Jan 23 17:58:07.989127 ignition[1317]: INFO : mount: mount passed
Jan 23 17:58:07.991146 ignition[1317]: INFO : Ignition finished successfully
Jan 23 17:58:07.995965 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 17:58:08.002600 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 17:58:08.059374 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 17:58:08.099566 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1328)
Jan 23 17:58:08.105410 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fef013c8-c90f-4bd4-8573-9f69d2a021ca
Jan 23 17:58:08.105485 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 17:58:08.113775 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 17:58:08.113849 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 17:58:08.117856 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 17:58:08.177321 ignition[1345]: INFO : Ignition 2.22.0
Jan 23 17:58:08.179552 ignition[1345]: INFO : Stage: files
Jan 23 17:58:08.179552 ignition[1345]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:08.179552 ignition[1345]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:08.179552 ignition[1345]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:08.191387 ignition[1345]: INFO : PUT result: OK
Jan 23 17:58:08.196453 ignition[1345]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 17:58:08.200160 ignition[1345]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 17:58:08.200160 ignition[1345]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 17:58:08.223492 ignition[1345]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 17:58:08.227075 ignition[1345]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 17:58:08.230951 unknown[1345]: wrote ssh authorized keys file for user: core
Jan 23 17:58:08.233560 ignition[1345]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 17:58:08.240030 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 17:58:08.240030 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 17:58:09.200217 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 17:58:11.745718 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 17:58:11.750973 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:58:11.780208 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 17:58:11.784448 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:58:11.789049 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 17:58:11.789049 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:58:11.801941 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:58:11.801941 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:58:11.801941 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 17:58:12.253831 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 17:58:12.652612 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 17:58:12.652612 ignition[1345]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 17:58:12.666949 ignition[1345]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 17:58:12.676048 ignition[1345]: INFO : files: files passed
Jan 23 17:58:12.676048 ignition[1345]: INFO : Ignition finished successfully
Jan 23 17:58:12.704656 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 17:58:12.709097 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 17:58:12.719637 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 17:58:12.745249 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 17:58:12.746041 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 17:58:12.765124 initrd-setup-root-after-ignition[1379]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:58:12.769097 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:58:12.769097 initrd-setup-root-after-ignition[1375]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 17:58:12.777964 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 17:58:12.781812 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 17:58:12.792929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 17:58:12.871963 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 17:58:12.874945 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 17:58:12.882102 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 17:58:12.887005 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 17:58:12.891686 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 17:58:12.897815 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 17:58:12.942759 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 17:58:12.950610 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 17:58:12.993085 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 17:58:12.994323 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 17:58:12.996926 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 17:58:12.997870 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 17:58:12.998202 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 17:58:12.999320 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 17:58:12.999798 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 17:58:13.000110 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 17:58:13.000492 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 17:58:13.001278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 17:58:13.001641 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 17:58:13.001996 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 17:58:13.002387 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 17:58:13.003849 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 17:58:13.011486 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 17:58:13.012442 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 17:58:13.013438 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 17:58:13.013834 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 17:58:13.019318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 17:58:13.020313 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 17:58:13.021338 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 17:58:13.034703 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 17:58:13.035456 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 17:58:13.035799 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 17:58:13.053446 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 17:58:13.054250 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 17:58:13.060123 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 17:58:13.060453 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 17:58:13.081763 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 17:58:13.091026 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 17:58:13.091380 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 17:58:13.115375 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 17:58:13.118058 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 17:58:13.118557 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 17:58:13.127053 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 17:58:13.127436 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 17:58:13.169347 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 17:58:13.171908 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 17:58:13.212425 ignition[1399]: INFO : Ignition 2.22.0
Jan 23 17:58:13.214725 ignition[1399]: INFO : Stage: umount
Jan 23 17:58:13.214725 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 17:58:13.214725 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 17:58:13.214725 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 17:58:13.227579 ignition[1399]: INFO : PUT result: OK
Jan 23 17:58:13.232862 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 17:58:13.237019 ignition[1399]: INFO : umount: umount passed
Jan 23 17:58:13.242966 ignition[1399]: INFO : Ignition finished successfully
Jan 23 17:58:13.249307 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 17:58:13.251889 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 17:58:13.259274 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 17:58:13.259485 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 17:58:13.265823 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 17:58:13.266352 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 17:58:13.273154 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 17:58:13.273431 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 17:58:13.282518 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 17:58:13.282674 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 17:58:13.287817 systemd[1]: Stopped target network.target - Network.
Jan 23 17:58:13.298875 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 17:58:13.298999 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 17:58:13.302219 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 17:58:13.309154 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 17:58:13.313813 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:58:13.317033 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 17:58:13.319841 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 17:58:13.325234 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 17:58:13.325316 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 17:58:13.327948 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 17:58:13.328023 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 17:58:13.334560 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 17:58:13.334695 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 17:58:13.337458 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 17:58:13.337598 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 17:58:13.344410 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 17:58:13.344524 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 17:58:13.347454 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 17:58:13.354344 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 17:58:13.379213 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 17:58:13.379492 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 17:58:13.403375 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 17:58:13.403908 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 17:58:13.403991 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:58:13.418075 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Jan 23 17:58:13.439302 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 17:58:13.439611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 17:58:13.449296 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 17:58:13.449846 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 17:58:13.454948 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 17:58:13.455090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:58:13.461102 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 17:58:13.467428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 17:58:13.467570 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 17:58:13.475611 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 17:58:13.475719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:58:13.487176 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 17:58:13.487271 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 17:58:13.501781 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:58:13.512242 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 17:58:13.533788 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 17:58:13.534073 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:58:13.540977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 17:58:13.541465 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 17:58:13.544716 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 23 17:58:13.544793 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:58:13.552892 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 17:58:13.552998 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 17:58:13.562959 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 17:58:13.563073 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 17:58:13.570548 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 17:58:13.570666 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 17:58:13.579187 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 17:58:13.593853 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 17:58:13.594158 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:58:13.603466 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 17:58:13.603821 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:58:13.612414 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 17:58:13.612524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 17:58:13.639887 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 17:58:13.641319 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 17:58:13.651848 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 17:58:13.653599 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 17:58:13.658051 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 17:58:13.662918 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 17:58:13.696375 systemd[1]: Switching root. 
Jan 23 17:58:13.759317 systemd-journald[259]: Journal stopped Jan 23 17:58:16.523252 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). Jan 23 17:58:16.523386 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 17:58:16.523431 kernel: SELinux: policy capability open_perms=1 Jan 23 17:58:16.523464 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 17:58:16.523497 kernel: SELinux: policy capability always_check_network=0 Jan 23 17:58:16.523585 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 17:58:16.523621 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 17:58:16.523659 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 17:58:16.523688 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 17:58:16.523725 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 17:58:16.523756 kernel: audit: type=1403 audit(1769191094.236:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 17:58:16.523797 systemd[1]: Successfully loaded SELinux policy in 130.790ms. Jan 23 17:58:16.523843 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.587ms. Jan 23 17:58:16.523879 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 17:58:16.523910 systemd[1]: Detected virtualization amazon. Jan 23 17:58:16.523941 systemd[1]: Detected architecture arm64. Jan 23 17:58:16.523975 systemd[1]: Detected first boot. Jan 23 17:58:16.524009 systemd[1]: Initializing machine ID from VM UUID. Jan 23 17:58:16.524039 kernel: NET: Registered PF_VSOCK protocol family Jan 23 17:58:16.524072 zram_generator::config[1442]: No configuration found. 
Jan 23 17:58:16.524120 systemd[1]: Populated /etc with preset unit settings. Jan 23 17:58:16.524152 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 17:58:16.524184 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 17:58:16.524212 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 17:58:16.524242 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:16.524278 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 17:58:16.524307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 17:58:16.524338 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 17:58:16.524368 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 17:58:16.524396 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 17:58:16.524428 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 17:58:16.524458 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 17:58:16.524490 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 17:58:16.524524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 17:58:16.524631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 17:58:16.524668 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 17:58:16.524700 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 17:58:16.524733 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 23 17:58:16.524765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 17:58:16.524803 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 17:58:16.524832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 17:58:16.524869 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 17:58:16.524901 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 17:58:16.524931 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 17:58:16.524964 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 17:58:16.524997 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 17:58:16.525026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 17:58:16.525056 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 17:58:16.525084 systemd[1]: Reached target slices.target - Slice Units. Jan 23 17:58:16.525116 systemd[1]: Reached target swap.target - Swaps. Jan 23 17:58:16.525151 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 17:58:16.525180 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 17:58:16.525209 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 17:58:16.525238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 17:58:16.525268 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 17:58:16.525295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 17:58:16.525324 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 17:58:16.525351 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 23 17:58:16.525383 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 17:58:16.525416 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 17:58:16.525453 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 17:58:16.525483 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 17:58:16.525511 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 17:58:16.526107 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 17:58:16.526157 systemd[1]: Reached target machines.target - Containers. Jan 23 17:58:16.526191 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 17:58:16.526220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:58:16.526259 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 17:58:16.526291 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 17:58:16.527761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:58:16.527812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:58:16.527842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:58:16.527877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 17:58:16.527908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:58:16.527943 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 17:58:16.527975 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 23 17:58:16.528014 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 17:58:16.528044 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 17:58:16.528075 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 17:58:16.528109 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:58:16.528144 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 17:58:16.528174 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 17:58:16.528211 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 17:58:16.528245 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 17:58:16.528274 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 17:58:16.528304 kernel: loop: module loaded Jan 23 17:58:16.528332 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 17:58:16.528365 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 17:58:16.528393 systemd[1]: Stopped verity-setup.service. Jan 23 17:58:16.528430 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 17:58:16.528465 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 17:58:16.528498 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 17:58:16.528561 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 17:58:16.528651 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 17:58:16.528701 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 23 17:58:16.528743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 17:58:16.528774 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 17:58:16.528805 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 17:58:16.528840 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:58:16.528871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:58:16.528900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:58:16.528929 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:58:16.528960 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:58:16.528990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:58:16.529030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 17:58:16.529064 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 17:58:16.529093 kernel: fuse: init (API version 7.41) Jan 23 17:58:16.529123 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 17:58:16.529151 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 17:58:16.529181 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 17:58:16.529213 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 17:58:16.529243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 17:58:16.529276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:58:16.529308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 23 17:58:16.529342 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:58:16.529372 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 17:58:16.529407 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:58:16.529513 systemd-journald[1521]: Collecting audit messages is disabled. Jan 23 17:58:16.530703 systemd-journald[1521]: Journal started Jan 23 17:58:16.530794 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec26825426ad7da948b7ee4d752cbc2d) is 8M, max 75.3M, 67.3M free. Jan 23 17:58:15.764001 systemd[1]: Queued start job for default target multi-user.target. Jan 23 17:58:15.780966 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 17:58:15.781940 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 17:58:16.538872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 17:58:16.556801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 17:58:16.565057 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 17:58:16.567460 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 17:58:16.585891 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 17:58:16.589378 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 17:58:16.596681 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 17:58:16.600172 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 17:58:16.611591 kernel: ACPI: bus type drm_connector registered Jan 23 17:58:16.633158 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 23 17:58:16.634653 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:58:16.638645 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 17:58:16.666104 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 17:58:16.697485 kernel: loop0: detected capacity change from 0 to 100632 Jan 23 17:58:16.701499 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 17:58:16.706833 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 17:58:16.713885 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 17:58:16.729843 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 17:58:16.747970 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 17:58:16.759040 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 17:58:16.765516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 17:58:16.775115 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 17:58:16.831395 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 17:58:16.839760 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec26825426ad7da948b7ee4d752cbc2d is 82.001ms for 928 entries. Jan 23 17:58:16.839760 systemd-journald[1521]: System Journal (/var/log/journal/ec26825426ad7da948b7ee4d752cbc2d) is 8M, max 195.6M, 187.6M free. Jan 23 17:58:16.937880 systemd-journald[1521]: Received client request to flush runtime journal. Jan 23 17:58:16.937968 kernel: loop1: detected capacity change from 0 to 61264 Jan 23 17:58:16.863823 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 17:58:16.871636 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Jan 23 17:58:16.947713 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 17:58:16.975642 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 17:58:16.983573 kernel: loop2: detected capacity change from 0 to 207008 Jan 23 17:58:17.004684 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 17:58:17.011847 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 17:58:17.058867 kernel: loop3: detected capacity change from 0 to 119840 Jan 23 17:58:17.068822 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Jan 23 17:58:17.068865 systemd-tmpfiles[1598]: ACLs are not supported, ignoring. Jan 23 17:58:17.083651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 17:58:17.167643 kernel: loop4: detected capacity change from 0 to 100632 Jan 23 17:58:17.201587 kernel: loop5: detected capacity change from 0 to 61264 Jan 23 17:58:17.223679 kernel: loop6: detected capacity change from 0 to 207008 Jan 23 17:58:17.271871 kernel: loop7: detected capacity change from 0 to 119840 Jan 23 17:58:17.288232 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 17:58:17.289366 (sd-merge)[1603]: Merged extensions into '/usr'. Jan 23 17:58:17.299936 systemd[1]: Reload requested from client PID 1554 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 17:58:17.300098 systemd[1]: Reloading... Jan 23 17:58:17.519563 zram_generator::config[1632]: No configuration found. Jan 23 17:58:17.987330 systemd[1]: Reloading finished in 686 ms. Jan 23 17:58:18.008708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 17:58:18.012441 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 17:58:18.032073 systemd[1]: Starting ensure-sysext.service... 
Jan 23 17:58:18.038883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 17:58:18.047303 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 17:58:18.095372 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jan 23 17:58:18.095407 systemd[1]: Reloading... Jan 23 17:58:18.137438 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 17:58:18.137524 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 17:58:18.138211 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 17:58:18.138864 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 17:58:18.144214 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 17:58:18.144908 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 23 17:58:18.145041 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 23 17:58:18.161425 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Jan 23 17:58:18.163618 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:58:18.163631 systemd-tmpfiles[1682]: Skipping /boot Jan 23 17:58:18.199078 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 17:58:18.199112 systemd-tmpfiles[1682]: Skipping /boot Jan 23 17:58:18.374566 zram_generator::config[1727]: No configuration found. Jan 23 17:58:18.564172 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 23 17:58:18.640884 (udev-worker)[1735]: Network interface NamePolicy= disabled on kernel command line. Jan 23 17:58:19.018955 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 17:58:19.020912 systemd[1]: Reloading finished in 924 ms. Jan 23 17:58:19.099119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 17:58:19.109591 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 17:58:19.114042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 17:58:19.210354 systemd[1]: Finished ensure-sysext.service. Jan 23 17:58:19.243607 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:58:19.249902 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 17:58:19.253188 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 17:58:19.257922 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 17:58:19.265967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 17:58:19.270995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 17:58:19.307010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 17:58:19.311190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 17:58:19.311267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 17:58:19.318956 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 23 17:58:19.328027 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 17:58:19.341102 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 17:58:19.343936 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 17:58:19.353186 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 17:58:19.363964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 17:58:19.368512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 17:58:19.368971 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 17:58:19.390275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 17:58:19.390972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 17:58:19.394763 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 17:58:19.395409 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 17:58:19.399085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 17:58:19.423043 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 17:58:19.445010 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 17:58:19.445671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 17:58:19.448874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 17:58:19.498666 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 17:58:19.526195 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 17:58:19.536575 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 23 17:58:19.624654 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 17:58:19.663562 augenrules[1940]: No rules Jan 23 17:58:19.666799 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:58:19.667461 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:58:19.673707 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 17:58:19.681902 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 17:58:19.705343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 17:58:19.712043 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 17:58:19.749646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 17:58:19.764775 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 17:58:19.888563 systemd-networkd[1894]: lo: Link UP Jan 23 17:58:19.888586 systemd-networkd[1894]: lo: Gained carrier Jan 23 17:58:19.891427 systemd-networkd[1894]: Enumeration completed Jan 23 17:58:19.891640 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 17:58:19.896367 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 17:58:19.898351 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:58:19.898359 systemd-networkd[1894]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 23 17:58:19.902095 systemd-networkd[1894]: eth0: Link UP Jan 23 17:58:19.902375 systemd-networkd[1894]: eth0: Gained carrier Jan 23 17:58:19.902411 systemd-networkd[1894]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 17:58:19.903511 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 17:58:19.913691 systemd-networkd[1894]: eth0: DHCPv4 address 172.31.24.204/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 17:58:19.925865 systemd-resolved[1896]: Positive Trust Anchors: Jan 23 17:58:19.925899 systemd-resolved[1896]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 17:58:19.925960 systemd-resolved[1896]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 17:58:19.943043 systemd-resolved[1896]: Defaulting to hostname 'linux'. Jan 23 17:58:19.946279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 17:58:19.948857 systemd[1]: Reached target network.target - Network. Jan 23 17:58:19.948993 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 17:58:19.961181 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 17:58:19.982604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 23 17:58:19.986140 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 17:58:19.989017 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 17:58:19.992575 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 17:58:19.995794 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 17:58:19.998361 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 17:58:20.001175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 17:58:20.005015 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 17:58:20.005085 systemd[1]: Reached target paths.target - Path Units. Jan 23 17:58:20.007267 systemd[1]: Reached target timers.target - Timer Units. Jan 23 17:58:20.011254 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 17:58:20.016947 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 17:58:20.023808 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 17:58:20.027439 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 17:58:20.030249 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 17:58:20.038069 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 17:58:20.041444 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 17:58:20.045778 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 17:58:20.048815 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 17:58:20.051103 systemd[1]: Reached target basic.target - Basic System. 
Jan 23 17:58:20.053634 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:58:20.053705 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 17:58:20.056757 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 17:58:20.065106 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 17:58:20.074149 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 17:58:20.083699 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 17:58:20.089892 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 17:58:20.093447 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 17:58:20.093798 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 17:58:20.098238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 17:58:20.103077 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:58:20.123737 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 17:58:20.131958 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 17:58:20.151272 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 17:58:20.162109 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 17:58:20.184995 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 17:58:20.191268 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 17:58:20.193568 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 23 17:58:20.198038 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 17:58:20.223726 jq[1970]: false Jan 23 17:58:20.225263 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 17:58:20.238391 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 17:58:20.246289 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 17:58:20.254033 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 17:58:20.277605 jq[1983]: true Jan 23 17:58:20.320563 extend-filesystems[1971]: Found /dev/nvme0n1p6 Jan 23 17:58:20.329235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 17:58:20.330366 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 17:58:20.372994 extend-filesystems[1971]: Found /dev/nvme0n1p9 Jan 23 17:58:20.371848 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 17:58:20.400764 jq[1994]: true Jan 23 17:58:20.401190 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9 Jan 23 17:58:20.411742 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:20.416728 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: ---------------------------------------------------- Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: corporation. Support and training for ntp-4 are Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: available at https://www.nwtime.org/support Jan 23 17:58:20.419113 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: ---------------------------------------------------- Jan 23 17:58:20.411862 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:20.411881 ntpd[1973]: ---------------------------------------------------- Jan 23 17:58:20.411899 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:20.411915 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:20.411932 ntpd[1973]: corporation. Support and training for ntp-4 are Jan 23 17:58:20.411948 ntpd[1973]: available at https://www.nwtime.org/support Jan 23 17:58:20.411964 ntpd[1973]: ---------------------------------------------------- Jan 23 17:58:20.432302 ntpd[1973]: proto: precision = 0.096 usec (-23) Jan 23 17:58:20.436337 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: proto: precision = 0.096 usec (-23) Jan 23 17:58:20.438282 (ntainerd)[1996]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 17:58:20.438871 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 23 17:58:20.459494 ntpd[1973]: basedate set to 2026-01-11 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: basedate set to 2026-01-11 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Listen normally on 3 eth0 172.31.24.204:123 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: bind(21) AF_INET6 [fe80::43c:c0ff:feeb:c6c1%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:58:20.464689 ntpd[1973]: 23 Jan 17:58:20 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::43c:c0ff:feeb:c6c1%2]:123 Jan 23 17:58:20.459637 ntpd[1973]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:20.459832 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:20.459878 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:20.460186 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:20.460232 ntpd[1973]: Listen normally on 3 eth0 172.31.24.204:123 Jan 23 17:58:20.460277 ntpd[1973]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:20.460323 ntpd[1973]: bind(21) AF_INET6 [fe80::43c:c0ff:feeb:c6c1%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 17:58:20.460359 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::43c:c0ff:feeb:c6c1%2]:123 Jan 23 17:58:20.465820 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9 Jan 23 17:58:20.476569 extend-filesystems[2024]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 17:58:20.476431 systemd-coredump[2023]: Process 1973 (ntpd) of 
user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 17:58:20.490522 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 17:58:20.488940 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 17:58:20.497129 tar[1989]: linux-arm64/LICENSE Jan 23 17:58:20.498680 tar[1989]: linux-arm64/helm Jan 23 17:58:20.502957 systemd[1]: Started systemd-coredump@0-2023-0.service - Process Core Dump (PID 2023/UID 0). Jan 23 17:58:20.546382 dbus-daemon[1968]: [system] SELinux support is enabled Jan 23 17:58:20.548738 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 17:58:20.605215 update_engine[1981]: I20260123 17:58:20.561295 1981 main.cc:92] Flatcar Update Engine starting Jan 23 17:58:20.605215 update_engine[1981]: I20260123 17:58:20.595904 1981 update_check_scheduler.cc:74] Next update check in 4m35s Jan 23 17:58:20.586289 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1894 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 17:58:20.560842 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 17:58:20.560894 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 17:58:20.567141 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 17:58:20.567180 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 17:58:20.607951 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 17:58:20.614616 systemd-logind[1978]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 17:58:20.614667 systemd-logind[1978]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 17:58:20.619871 systemd-logind[1978]: New seat seat0. Jan 23 17:58:20.623870 systemd[1]: Started update-engine.service - Update Engine. Jan 23 17:58:20.645888 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 17:58:20.651631 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 17:58:20.664565 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 17:58:20.684522 extend-filesystems[2024]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 17:58:20.684522 extend-filesystems[2024]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 17:58:20.684522 extend-filesystems[2024]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 17:58:20.704150 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9 Jan 23 17:58:20.691399 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 17:58:20.710967 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 17:58:20.728442 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 17:58:20.736215 bash[2050]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:58:20.738562 coreos-metadata[1967]: Jan 23 17:58:20.737 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:58:20.739775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 17:58:20.747251 coreos-metadata[1967]: Jan 23 17:58:20.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 17:58:20.753633 systemd[1]: Starting sshkeys.service... 
Jan 23 17:58:20.758311 coreos-metadata[1967]: Jan 23 17:58:20.758 INFO Fetch successful Jan 23 17:58:20.758311 coreos-metadata[1967]: Jan 23 17:58:20.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 17:58:20.770482 coreos-metadata[1967]: Jan 23 17:58:20.767 INFO Fetch successful Jan 23 17:58:20.770482 coreos-metadata[1967]: Jan 23 17:58:20.767 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 17:58:20.773890 coreos-metadata[1967]: Jan 23 17:58:20.770 INFO Fetch successful Jan 23 17:58:20.773890 coreos-metadata[1967]: Jan 23 17:58:20.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 17:58:20.782667 coreos-metadata[1967]: Jan 23 17:58:20.778 INFO Fetch successful Jan 23 17:58:20.782667 coreos-metadata[1967]: Jan 23 17:58:20.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 17:58:20.787408 coreos-metadata[1967]: Jan 23 17:58:20.787 INFO Fetch failed with 404: resource not found Jan 23 17:58:20.787408 coreos-metadata[1967]: Jan 23 17:58:20.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 17:58:20.789327 coreos-metadata[1967]: Jan 23 17:58:20.789 INFO Fetch successful Jan 23 17:58:20.789327 coreos-metadata[1967]: Jan 23 17:58:20.789 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 17:58:20.803061 coreos-metadata[1967]: Jan 23 17:58:20.800 INFO Fetch successful Jan 23 17:58:20.803061 coreos-metadata[1967]: Jan 23 17:58:20.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 17:58:20.803777 coreos-metadata[1967]: Jan 23 17:58:20.803 INFO Fetch successful Jan 23 17:58:20.803777 coreos-metadata[1967]: Jan 23 17:58:20.803 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 17:58:20.806073 coreos-metadata[1967]: Jan 23 
17:58:20.804 INFO Fetch successful Jan 23 17:58:20.806073 coreos-metadata[1967]: Jan 23 17:58:20.804 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 17:58:20.822125 coreos-metadata[1967]: Jan 23 17:58:20.818 INFO Fetch successful Jan 23 17:58:20.914837 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 17:58:20.923366 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 17:58:20.962199 systemd-networkd[1894]: eth0: Gained IPv6LL Jan 23 17:58:20.974681 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 17:58:20.986370 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 17:58:21.002127 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 17:58:21.012215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:21.023339 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 17:58:21.139160 coreos-metadata[2082]: Jan 23 17:58:21.138 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 17:58:21.146690 coreos-metadata[2082]: Jan 23 17:58:21.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 17:58:21.148096 coreos-metadata[2082]: Jan 23 17:58:21.147 INFO Fetch successful Jan 23 17:58:21.148096 coreos-metadata[2082]: Jan 23 17:58:21.147 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 17:58:21.151662 coreos-metadata[2082]: Jan 23 17:58:21.151 INFO Fetch successful Jan 23 17:58:21.155677 unknown[2082]: wrote ssh authorized keys file for user: core Jan 23 17:58:21.215963 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 23 17:58:21.231038 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 17:58:21.377573 update-ssh-keys[2124]: Updated "/home/core/.ssh/authorized_keys" Jan 23 17:58:21.379917 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 17:58:21.398351 systemd[1]: Finished sshkeys.service. Jan 23 17:58:21.463106 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 17:58:21.671466 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 17:58:21.703788 amazon-ssm-agent[2093]: Initializing new seelog logger Jan 23 17:58:21.704287 amazon-ssm-agent[2093]: New Seelog Logger Creation Complete Jan 23 17:58:21.704287 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.704287 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 processing appconfig overrides Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 processing appconfig overrides Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 processing appconfig overrides Jan 23 17:58:21.710788 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.7062 INFO Proxy environment variables: Jan 23 17:58:21.720565 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.720565 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:21.720565 amazon-ssm-agent[2093]: 2026/01/23 17:58:21 processing appconfig overrides Jan 23 17:58:21.716890 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 17:58:21.737359 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2042 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 17:58:21.754631 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 17:58:21.799233 containerd[1996]: time="2026-01-23T17:58:21Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 17:58:21.808255 containerd[1996]: time="2026-01-23T17:58:21.807870868Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 17:58:21.809450 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.7062 INFO https_proxy: Jan 23 17:58:21.821273 systemd-coredump[2026]: Process 1973 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1973: #0 0x0000aaaae94f0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaae949fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaae94a0240 n/a (ntpd + 0x10240) #3 0x0000aaaae949be14 n/a (ntpd + 0xbe14) #4 0x0000aaaae949d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaae94a5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaae949738c n/a (ntpd + 0x738c) #7 0x0000ffffb4d62034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffb4d62118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaae94973f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 17:58:21.867737 systemd[1]: systemd-coredump@0-2023-0.service: Deactivated successfully. Jan 23 17:58:21.878682 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 17:58:21.879014 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 17:58:21.909906 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.7062 INFO http_proxy: Jan 23 17:58:21.929147 containerd[1996]: time="2026-01-23T17:58:21.927147713Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.844µs" Jan 23 17:58:21.929147 containerd[1996]: time="2026-01-23T17:58:21.927218057Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 17:58:21.929147 containerd[1996]: time="2026-01-23T17:58:21.927257465Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.938700125Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.938787881Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.938847305Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 
23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.938978285Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.939004193Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.939367325Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.939398369Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.939425321Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.939449453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 17:58:21.940078 containerd[1996]: time="2026-01-23T17:58:21.939635189Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 17:58:21.943723 containerd[1996]: time="2026-01-23T17:58:21.943656089Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 17:58:21.943885 containerd[1996]: time="2026-01-23T17:58:21.943761293Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 23 17:58:21.943885 containerd[1996]: time="2026-01-23T17:58:21.943792877Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 17:58:21.943885 containerd[1996]: time="2026-01-23T17:58:21.943871549Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 17:58:21.944553 containerd[1996]: time="2026-01-23T17:58:21.944387741Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 17:58:21.948800 containerd[1996]: time="2026-01-23T17:58:21.948730937Z" level=info msg="metadata content store policy set" policy=shared Jan 23 17:58:21.960100 containerd[1996]: time="2026-01-23T17:58:21.960030581Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960151385Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960187565Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960220709Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960249809Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960311045Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960358877Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 
containerd[1996]: time="2026-01-23T17:58:21.960389657Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960417677Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960444329Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960468089Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960497777Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960763169Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960804629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 17:58:21.962017 containerd[1996]: time="2026-01-23T17:58:21.960837581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.960863885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.960892793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.960921077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: 
time="2026-01-23T17:58:21.960949949Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.960975221Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.961008377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.961036109Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.961062401Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.961410461Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.961444625Z" level=info msg="Start snapshots syncer" Jan 23 17:58:21.963352 containerd[1996]: time="2026-01-23T17:58:21.961501961Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 17:58:21.968231 containerd[1996]: time="2026-01-23T17:58:21.967081901Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 17:58:21.968231 containerd[1996]: time="2026-01-23T17:58:21.967259501Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 17:58:21.968657 containerd[1996]: time="2026-01-23T17:58:21.967383101Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 17:58:21.970396 containerd[1996]: time="2026-01-23T17:58:21.970322597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 17:58:21.970497 containerd[1996]: time="2026-01-23T17:58:21.970416125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 17:58:21.970497 containerd[1996]: time="2026-01-23T17:58:21.970459193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 17:58:21.970627 containerd[1996]: time="2026-01-23T17:58:21.970490369Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 17:58:21.984770 containerd[1996]: time="2026-01-23T17:58:21.984581825Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 17:58:21.984770 containerd[1996]: time="2026-01-23T17:58:21.984685577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 17:58:21.984770 containerd[1996]: time="2026-01-23T17:58:21.984735701Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 17:58:21.985089 containerd[1996]: time="2026-01-23T17:58:21.984821333Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 17:58:21.985089 containerd[1996]: time="2026-01-23T17:58:21.984865577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 17:58:21.985089 containerd[1996]: time="2026-01-23T17:58:21.984912665Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 17:58:21.985089 containerd[1996]: time="2026-01-23T17:58:21.985014125Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:58:21.985089 containerd[1996]: time="2026-01-23T17:58:21.985061549Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 17:58:21.985323 containerd[1996]: time="2026-01-23T17:58:21.985096313Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:58:21.985323 containerd[1996]: time="2026-01-23T17:58:21.985138481Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 17:58:21.985323 containerd[1996]: time="2026-01-23T17:58:21.985163501Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 17:58:21.985323 containerd[1996]: time="2026-01-23T17:58:21.985199597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 17:58:21.985323 containerd[1996]: time="2026-01-23T17:58:21.985238393Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 17:58:21.985573 containerd[1996]: time="2026-01-23T17:58:21.985424921Z" level=info msg="runtime interface created" Jan 23 17:58:21.985573 containerd[1996]: time="2026-01-23T17:58:21.985454501Z" level=info msg="created NRI interface" Jan 23 17:58:21.985573 containerd[1996]: time="2026-01-23T17:58:21.985481429Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 17:58:21.987646 containerd[1996]: time="2026-01-23T17:58:21.985525313Z" level=info msg="Connect containerd service" Jan 23 17:58:21.995064 containerd[1996]: time="2026-01-23T17:58:21.994755653Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 17:58:22.004333 systemd[1]: 
ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:22.005191 containerd[1996]: time="2026-01-23T17:58:22.005124601Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:58:22.010551 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.7062 INFO no_proxy: Jan 23 17:58:22.014996 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 17:58:22.121511 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.7064 INFO Checking if agent identity type OnPrem can be assumed Jan 23 17:58:22.201559 ntpd[2197]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: ntpd 4.2.8p18@1.4062-o Fri Jan 23 15:31:01 UTC 2026 (1): Starting Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: ---------------------------------------------------- Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: corporation. 
Support and training for ntp-4 are Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: available at https://www.nwtime.org/support Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: ---------------------------------------------------- Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: proto: precision = 0.096 usec (-23) Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: basedate set to 2026-01-11 Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:22.204065 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:22.201714 ntpd[2197]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 17:58:22.201733 ntpd[2197]: ---------------------------------------------------- Jan 23 17:58:22.201750 ntpd[2197]: ntp-4 is maintained by Network Time Foundation, Jan 23 17:58:22.201766 ntpd[2197]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 17:58:22.201783 ntpd[2197]: corporation. 
Support and training for ntp-4 are Jan 23 17:58:22.201799 ntpd[2197]: available at https://www.nwtime.org/support Jan 23 17:58:22.201815 ntpd[2197]: ---------------------------------------------------- Jan 23 17:58:22.202861 ntpd[2197]: proto: precision = 0.096 usec (-23) Jan 23 17:58:22.203170 ntpd[2197]: basedate set to 2026-01-11 Jan 23 17:58:22.203189 ntpd[2197]: gps base set to 2026-01-11 (week 2401) Jan 23 17:58:22.215561 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:22.215561 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listen normally on 3 eth0 172.31.24.204:123 Jan 23 17:58:22.215561 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:22.215561 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listen normally on 5 eth0 [fe80::43c:c0ff:feeb:c6c1%2]:123 Jan 23 17:58:22.215561 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: Listening on routing socket on fd #22 for interface updates Jan 23 17:58:22.203304 ntpd[2197]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 17:58:22.215911 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:22.215911 ntpd[2197]: 23 Jan 17:58:22 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:22.203347 ntpd[2197]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 17:58:22.205705 ntpd[2197]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 17:58:22.205761 ntpd[2197]: Listen normally on 3 eth0 172.31.24.204:123 Jan 23 17:58:22.205809 ntpd[2197]: Listen normally on 4 lo [::1]:123 Jan 23 17:58:22.205854 ntpd[2197]: Listen normally on 5 eth0 [fe80::43c:c0ff:feeb:c6c1%2]:123 Jan 23 17:58:22.205899 ntpd[2197]: Listening on routing socket on fd #22 for interface updates Jan 23 17:58:22.215727 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:22.215777 ntpd[2197]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 17:58:22.218598 amazon-ssm-agent[2093]: 2026-01-23 
17:58:21.7066 INFO Checking if agent identity type EC2 can be assumed Jan 23 17:58:22.317354 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9751 INFO Agent will take identity from EC2 Jan 23 17:58:22.327214 locksmithd[2046]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.386693007Z" level=info msg="Start subscribing containerd event" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.386805615Z" level=info msg="Start recovering state" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392188539Z" level=info msg="Start event monitor" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392232063Z" level=info msg="Start cni network conf syncer for default" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392253135Z" level=info msg="Start streaming server" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392274411Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392292303Z" level=info msg="runtime interface starting up..." Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392307927Z" level=info msg="starting plugins..." Jan 23 17:58:22.392586 containerd[1996]: time="2026-01-23T17:58:22.392337819Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 17:58:22.393065 containerd[1996]: time="2026-01-23T17:58:22.392890635Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 17:58:22.393065 containerd[1996]: time="2026-01-23T17:58:22.392989635Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 17:58:22.393065 containerd[1996]: time="2026-01-23T17:58:22.393094959Z" level=info msg="containerd successfully booted in 0.601903s" Jan 23 17:58:22.393220 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 23 17:58:22.413100 polkitd[2185]: Started polkitd version 126 Jan 23 17:58:22.416682 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9768 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 17:58:22.444657 polkitd[2185]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 17:58:22.445448 polkitd[2185]: Loading rules from directory /run/polkit-1/rules.d Jan 23 17:58:22.445562 polkitd[2185]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:58:22.446206 polkitd[2185]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 17:58:22.446254 polkitd[2185]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 17:58:22.446338 polkitd[2185]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 17:58:22.450298 polkitd[2185]: Finished loading, compiling and executing 2 rules Jan 23 17:58:22.451036 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 17:58:22.461351 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 17:58:22.466615 polkitd[2185]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 17:58:22.515619 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9768 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 17:58:22.536648 systemd-hostnamed[2042]: Hostname set to (transient) Jan 23 17:58:22.537092 systemd-resolved[1896]: System hostname changed to 'ip-172-31-24-204'. Jan 23 17:58:22.602266 amazon-ssm-agent[2093]: 2026/01/23 17:58:22 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 17:58:22.602266 amazon-ssm-agent[2093]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 17:58:22.602266 amazon-ssm-agent[2093]: 2026/01/23 17:58:22 processing appconfig overrides Jan 23 17:58:22.616327 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9768 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9768 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9768 INFO [Registrar] Starting registrar module Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9812 INFO [EC2Identity] Checking disk for registration info Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9812 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:21.9813 INFO [EC2Identity] Generating registration keypair Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.5576 INFO [EC2Identity] Checking write access before registering Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.5583 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6000 INFO [EC2Identity] EC2 registration was successful. Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6015 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6017 INFO [CredentialRefresher] credentialRefresher has started Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6017 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6297 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 17:58:22.630269 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6299 INFO [CredentialRefresher] Credentials ready Jan 23 17:58:22.633723 sshd_keygen[2014]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 17:58:22.702432 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 17:58:22.714488 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 17:58:22.717902 amazon-ssm-agent[2093]: 2026-01-23 17:58:22.6326 INFO [CredentialRefresher] Next credential rotation will be in 29.9999515336 minutes Jan 23 17:58:22.721145 systemd[1]: Started sshd@0-172.31.24.204:22-68.220.241.50:60936.service - OpenSSH per-connection server daemon (68.220.241.50:60936). Jan 23 17:58:22.762453 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 17:58:22.763228 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 17:58:22.773115 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 17:58:22.824050 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 17:58:22.834263 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 17:58:22.843120 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 17:58:22.846048 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 17:58:22.966799 tar[1989]: linux-arm64/README.md Jan 23 17:58:22.997623 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 17:58:23.341899 sshd[2231]: Accepted publickey for core from 68.220.241.50 port 60936 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:23.348778 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:23.364121 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 17:58:23.370676 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 17:58:23.400375 systemd-logind[1978]: New session 1 of user core. Jan 23 17:58:23.418612 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 17:58:23.429020 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 17:58:23.452032 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 17:58:23.458001 systemd-logind[1978]: New session c1 of user core. Jan 23 17:58:23.576773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:58:23.580482 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 17:58:23.602573 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 17:58:23.689128 amazon-ssm-agent[2093]: 2026-01-23 17:58:23.6889 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 17:58:23.775088 systemd[2246]: Queued start job for default target default.target. Jan 23 17:58:23.781847 systemd[2246]: Created slice app.slice - User Application Slice. Jan 23 17:58:23.781914 systemd[2246]: Reached target paths.target - Paths. Jan 23 17:58:23.782002 systemd[2246]: Reached target timers.target - Timers. Jan 23 17:58:23.784779 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 23 17:58:23.790480 amazon-ssm-agent[2093]: 2026-01-23 17:58:23.6977 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2260) started Jan 23 17:58:23.820075 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 17:58:23.821687 systemd[2246]: Reached target sockets.target - Sockets. Jan 23 17:58:23.821841 systemd[2246]: Reached target basic.target - Basic System. Jan 23 17:58:23.821920 systemd[2246]: Reached target default.target - Main User Target. Jan 23 17:58:23.821981 systemd[2246]: Startup finished in 348ms. Jan 23 17:58:23.822213 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 17:58:23.831986 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 17:58:23.836008 systemd[1]: Startup finished in 3.807s (kernel) + 12.470s (initrd) + 9.731s (userspace) = 26.009s. Jan 23 17:58:23.891687 amazon-ssm-agent[2093]: 2026-01-23 17:58:23.6978 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 17:58:24.212960 systemd[1]: Started sshd@1-172.31.24.204:22-68.220.241.50:58090.service - OpenSSH per-connection server daemon (68.220.241.50:58090). Jan 23 17:58:24.646304 kubelet[2257]: E0123 17:58:24.646131 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 17:58:24.651958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 17:58:24.652815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 17:58:24.653844 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 257.1M memory peak. 
Jan 23 17:58:24.802711 sshd[2284]: Accepted publickey for core from 68.220.241.50 port 58090 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:24.805095 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:24.814407 systemd-logind[1978]: New session 2 of user core. Jan 23 17:58:24.820802 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 17:58:25.163628 sshd[2289]: Connection closed by 68.220.241.50 port 58090 Jan 23 17:58:25.164856 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:25.172820 systemd[1]: sshd@1-172.31.24.204:22-68.220.241.50:58090.service: Deactivated successfully. Jan 23 17:58:25.176330 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 17:58:25.180268 systemd-logind[1978]: Session 2 logged out. Waiting for processes to exit. Jan 23 17:58:25.183452 systemd-logind[1978]: Removed session 2. Jan 23 17:58:25.254906 systemd[1]: Started sshd@2-172.31.24.204:22-68.220.241.50:58092.service - OpenSSH per-connection server daemon (68.220.241.50:58092). Jan 23 17:58:25.788513 sshd[2295]: Accepted publickey for core from 68.220.241.50 port 58092 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:25.790210 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:25.800404 systemd-logind[1978]: New session 3 of user core. Jan 23 17:58:25.805839 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 17:58:26.136656 sshd[2298]: Connection closed by 68.220.241.50 port 58092 Jan 23 17:58:26.135614 sshd-session[2295]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:26.142322 systemd[1]: sshd@2-172.31.24.204:22-68.220.241.50:58092.service: Deactivated successfully. Jan 23 17:58:26.146346 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 17:58:26.150666 systemd-logind[1978]: Session 3 logged out. 
Waiting for processes to exit. Jan 23 17:58:26.153023 systemd-logind[1978]: Removed session 3. Jan 23 17:58:26.228058 systemd[1]: Started sshd@3-172.31.24.204:22-68.220.241.50:58096.service - OpenSSH per-connection server daemon (68.220.241.50:58096). Jan 23 17:58:26.751142 sshd[2304]: Accepted publickey for core from 68.220.241.50 port 58096 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:26.753288 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:26.761096 systemd-logind[1978]: New session 4 of user core. Jan 23 17:58:26.769805 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 17:58:27.106273 sshd[2307]: Connection closed by 68.220.241.50 port 58096 Jan 23 17:58:27.105311 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:27.113016 systemd[1]: sshd@3-172.31.24.204:22-68.220.241.50:58096.service: Deactivated successfully. Jan 23 17:58:27.117937 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 17:58:27.119921 systemd-logind[1978]: Session 4 logged out. Waiting for processes to exit. Jan 23 17:58:27.123221 systemd-logind[1978]: Removed session 4. Jan 23 17:58:27.200755 systemd[1]: Started sshd@4-172.31.24.204:22-68.220.241.50:58098.service - OpenSSH per-connection server daemon (68.220.241.50:58098). Jan 23 17:58:27.728817 sshd[2313]: Accepted publickey for core from 68.220.241.50 port 58098 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:27.730950 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:27.739621 systemd-logind[1978]: New session 5 of user core. Jan 23 17:58:27.750781 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 17:58:28.043510 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 17:58:28.044794 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:58:28.077498 sudo[2317]: pam_unix(sudo:session): session closed for user root Jan 23 17:58:28.155300 sshd[2316]: Connection closed by 68.220.241.50 port 58098 Jan 23 17:58:28.156356 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:28.163834 systemd[1]: sshd@4-172.31.24.204:22-68.220.241.50:58098.service: Deactivated successfully. Jan 23 17:58:28.169138 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 17:58:28.173352 systemd-logind[1978]: Session 5 logged out. Waiting for processes to exit. Jan 23 17:58:28.175779 systemd-logind[1978]: Removed session 5. Jan 23 17:58:28.250374 systemd[1]: Started sshd@5-172.31.24.204:22-68.220.241.50:58108.service - OpenSSH per-connection server daemon (68.220.241.50:58108). Jan 23 17:58:28.779129 sshd[2323]: Accepted publickey for core from 68.220.241.50 port 58108 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:28.781622 sshd-session[2323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:28.790234 systemd-logind[1978]: New session 6 of user core. Jan 23 17:58:28.798858 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 17:58:29.057445 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 17:58:29.058912 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:58:29.067505 sudo[2328]: pam_unix(sudo:session): session closed for user root Jan 23 17:58:29.078430 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 17:58:29.079199 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:58:29.098272 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 17:58:29.170956 augenrules[2350]: No rules Jan 23 17:58:29.173817 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 17:58:29.174786 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 17:58:29.176747 sudo[2327]: pam_unix(sudo:session): session closed for user root Jan 23 17:58:28.892041 systemd-resolved[1896]: Clock change detected. Flushing caches. Jan 23 17:58:28.899884 systemd-journald[1521]: Time jumped backwards, rotating. Jan 23 17:58:28.943160 sshd[2326]: Connection closed by 68.220.241.50 port 58108 Jan 23 17:58:28.944814 sshd-session[2323]: pam_unix(sshd:session): session closed for user core Jan 23 17:58:28.952928 systemd[1]: sshd@5-172.31.24.204:22-68.220.241.50:58108.service: Deactivated successfully. Jan 23 17:58:28.957103 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 17:58:28.959644 systemd-logind[1978]: Session 6 logged out. Waiting for processes to exit. Jan 23 17:58:28.962795 systemd-logind[1978]: Removed session 6. Jan 23 17:58:29.040672 systemd[1]: Started sshd@6-172.31.24.204:22-68.220.241.50:58118.service - OpenSSH per-connection server daemon (68.220.241.50:58118). 
Jan 23 17:58:29.580774 sshd[2360]: Accepted publickey for core from 68.220.241.50 port 58118 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 17:58:29.583154 sshd-session[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 17:58:29.591469 systemd-logind[1978]: New session 7 of user core. Jan 23 17:58:29.603794 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 17:58:29.863756 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 17:58:29.865040 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 17:58:30.622935 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 17:58:30.638466 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 17:58:31.194795 dockerd[2381]: time="2026-01-23T17:58:31.194571111Z" level=info msg="Starting up" Jan 23 17:58:31.198537 dockerd[2381]: time="2026-01-23T17:58:31.197960547Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 17:58:31.221383 dockerd[2381]: time="2026-01-23T17:58:31.221296899Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 17:58:31.263743 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1306640563-merged.mount: Deactivated successfully. Jan 23 17:58:31.333047 systemd[1]: var-lib-docker-metacopy\x2dcheck993433913-merged.mount: Deactivated successfully. Jan 23 17:58:31.348037 dockerd[2381]: time="2026-01-23T17:58:31.347759632Z" level=info msg="Loading containers: start." Jan 23 17:58:31.361574 kernel: Initializing XFRM netlink socket Jan 23 17:58:31.777824 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:58:31.864058 systemd-networkd[1894]: docker0: Link UP Jan 23 17:58:31.883562 dockerd[2381]: time="2026-01-23T17:58:31.883388082Z" level=info msg="Loading containers: done." Jan 23 17:58:31.911555 dockerd[2381]: time="2026-01-23T17:58:31.911142198Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 17:58:31.911555 dockerd[2381]: time="2026-01-23T17:58:31.911259787Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 17:58:31.911555 dockerd[2381]: time="2026-01-23T17:58:31.911421019Z" level=info msg="Initializing buildkit" Jan 23 17:58:31.955628 dockerd[2381]: time="2026-01-23T17:58:31.955550371Z" level=info msg="Completed buildkit initialization" Jan 23 17:58:31.973116 dockerd[2381]: time="2026-01-23T17:58:31.973033591Z" level=info msg="Daemon has completed initialization" Jan 23 17:58:31.973786 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 17:58:31.975730 dockerd[2381]: time="2026-01-23T17:58:31.974774719Z" level=info msg="API listen on /run/docker.sock" Jan 23 17:58:32.252896 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1284709081-merged.mount: Deactivated successfully. Jan 23 17:58:33.175547 containerd[1996]: time="2026-01-23T17:58:33.174936989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 17:58:33.820129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694422493.mount: Deactivated successfully. Jan 23 17:58:34.592703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 17:58:34.596789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:58:35.253777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 17:58:35.271471 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:58:35.402683 kubelet[2659]: E0123 17:58:35.402612 2659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:58:35.412339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:58:35.413660 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:58:35.414885 systemd[1]: kubelet.service: Consumed 405ms CPU time, 107.7M memory peak.
Jan 23 17:58:35.657937 containerd[1996]: time="2026-01-23T17:58:35.656151381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:35.659150 containerd[1996]: time="2026-01-23T17:58:35.659089761Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 23 17:58:35.662123 containerd[1996]: time="2026-01-23T17:58:35.662055585Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:35.670236 containerd[1996]: time="2026-01-23T17:58:35.670177257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:35.672594 containerd[1996]: time="2026-01-23T17:58:35.672469377Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.497461156s"
Jan 23 17:58:35.672594 containerd[1996]: time="2026-01-23T17:58:35.672585177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 23 17:58:35.673556 containerd[1996]: time="2026-01-23T17:58:35.673472361Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 23 17:58:37.247988 containerd[1996]: time="2026-01-23T17:58:37.247881873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:37.252694 containerd[1996]: time="2026-01-23T17:58:37.252605889Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 23 17:58:37.261042 containerd[1996]: time="2026-01-23T17:58:37.260943465Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:37.267856 containerd[1996]: time="2026-01-23T17:58:37.267746937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:37.270223 containerd[1996]: time="2026-01-23T17:58:37.270000261Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.596433376s"
Jan 23 17:58:37.270223 containerd[1996]: time="2026-01-23T17:58:37.270072717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 23 17:58:37.271297 containerd[1996]: time="2026-01-23T17:58:37.270971157Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 23 17:58:38.630180 containerd[1996]: time="2026-01-23T17:58:38.630083652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:38.633065 containerd[1996]: time="2026-01-23T17:58:38.632534088Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747"
Jan 23 17:58:38.634114 containerd[1996]: time="2026-01-23T17:58:38.634037172Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:38.640186 containerd[1996]: time="2026-01-23T17:58:38.640122972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:38.642461 containerd[1996]: time="2026-01-23T17:58:38.642376884Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.371347671s"
Jan 23 17:58:38.642461 containerd[1996]: time="2026-01-23T17:58:38.642447792Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 23 17:58:38.643116 containerd[1996]: time="2026-01-23T17:58:38.643053432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 23 17:58:39.866744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367572057.mount: Deactivated successfully.
Jan 23 17:58:40.438686 containerd[1996]: time="2026-01-23T17:58:40.438628261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:40.440888 containerd[1996]: time="2026-01-23T17:58:40.440832973Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724"
Jan 23 17:58:40.442702 containerd[1996]: time="2026-01-23T17:58:40.442629157Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:40.446521 containerd[1996]: time="2026-01-23T17:58:40.445058593Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:40.446521 containerd[1996]: time="2026-01-23T17:58:40.446328301Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.803209325s"
Jan 23 17:58:40.446521 containerd[1996]: time="2026-01-23T17:58:40.446373649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 23 17:58:40.447597 containerd[1996]: time="2026-01-23T17:58:40.447546109Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 23 17:58:40.915986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580433908.mount: Deactivated successfully.
Jan 23 17:58:42.015940 containerd[1996]: time="2026-01-23T17:58:42.014566729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:42.016613 containerd[1996]: time="2026-01-23T17:58:42.016292989Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jan 23 17:58:42.018342 containerd[1996]: time="2026-01-23T17:58:42.017470357Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:42.022514 containerd[1996]: time="2026-01-23T17:58:42.022444681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:42.029322 containerd[1996]: time="2026-01-23T17:58:42.029183737Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.581435164s"
Jan 23 17:58:42.029322 containerd[1996]: time="2026-01-23T17:58:42.029268613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 23 17:58:42.031921 containerd[1996]: time="2026-01-23T17:58:42.031734205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 23 17:58:42.498040 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805108766.mount: Deactivated successfully.
Jan 23 17:58:42.506859 containerd[1996]: time="2026-01-23T17:58:42.506779647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 17:58:42.508807 containerd[1996]: time="2026-01-23T17:58:42.508425339Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 23 17:58:42.509880 containerd[1996]: time="2026-01-23T17:58:42.509823363Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 17:58:42.513212 containerd[1996]: time="2026-01-23T17:58:42.513162867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 17:58:42.514770 containerd[1996]: time="2026-01-23T17:58:42.514711131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 482.895098ms"
Jan 23 17:58:42.514770 containerd[1996]: time="2026-01-23T17:58:42.514766355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 23 17:58:42.515408 containerd[1996]: time="2026-01-23T17:58:42.515356179Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 23 17:58:42.994413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095768529.mount: Deactivated successfully.
Jan 23 17:58:45.255636 containerd[1996]: time="2026-01-23T17:58:45.255554093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:45.258942 containerd[1996]: time="2026-01-23T17:58:45.258877121Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Jan 23 17:58:45.260046 containerd[1996]: time="2026-01-23T17:58:45.259983497Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:45.266602 containerd[1996]: time="2026-01-23T17:58:45.266485841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:58:45.270573 containerd[1996]: time="2026-01-23T17:58:45.270474161Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.755057958s"
Jan 23 17:58:45.270573 containerd[1996]: time="2026-01-23T17:58:45.270564869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 23 17:58:45.663223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 17:58:45.666752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:46.041719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:46.055241 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 17:58:46.128305 kubelet[2820]: E0123 17:58:46.128233 2820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 17:58:46.134750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 17:58:46.135042 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 17:58:46.136169 systemd[1]: kubelet.service: Consumed 304ms CPU time, 107.1M memory peak.
Jan 23 17:58:52.264497 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 17:58:52.279928 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:52.281159 systemd[1]: kubelet.service: Consumed 304ms CPU time, 107.1M memory peak.
Jan 23 17:58:52.286124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:52.337592 systemd[1]: Reload requested from client PID 2837 ('systemctl') (unit session-7.scope)...
Jan 23 17:58:52.337781 systemd[1]: Reloading...
Jan 23 17:58:52.575567 zram_generator::config[2884]: No configuration found.
Jan 23 17:58:53.035163 systemd[1]: Reloading finished in 696 ms.
Jan 23 17:58:53.136611 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 17:58:53.136964 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 17:58:53.137715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:53.137800 systemd[1]: kubelet.service: Consumed 225ms CPU time, 94.9M memory peak.
Jan 23 17:58:53.141560 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 17:58:53.169203 systemd[1]: Started sshd@7-172.31.24.204:22-98.209.161.197:45810.service - OpenSSH per-connection server daemon (98.209.161.197:45810).
Jan 23 17:58:53.255124 sshd[2940]: Connection closed by 98.209.161.197 port 45810
Jan 23 17:58:53.257297 systemd[1]: sshd@7-172.31.24.204:22-98.209.161.197:45810.service: Deactivated successfully.
Jan 23 17:58:53.502168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 17:58:53.526099 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 17:58:53.599473 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 17:58:53.599473 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 17:58:53.599473 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 17:58:53.600036 kubelet[2949]: I0123 17:58:53.599622 2949 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 17:58:54.716424 kubelet[2949]: I0123 17:58:54.716338 2949 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 23 17:58:54.716424 kubelet[2949]: I0123 17:58:54.716394 2949 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 17:58:54.717056 kubelet[2949]: I0123 17:58:54.716934 2949 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 23 17:58:54.783319 kubelet[2949]: E0123 17:58:54.783257 2949 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 17:58:54.787926 kubelet[2949]: I0123 17:58:54.787852 2949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 17:58:54.801656 kubelet[2949]: I0123 17:58:54.801603 2949 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 17:58:54.808155 kubelet[2949]: I0123 17:58:54.808099 2949 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 17:58:54.810195 kubelet[2949]: I0123 17:58:54.810110 2949 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 17:58:54.810479 kubelet[2949]: I0123 17:58:54.810179 2949 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 17:58:54.810690 kubelet[2949]: I0123 17:58:54.810634 2949 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 17:58:54.810690 kubelet[2949]: I0123 17:58:54.810656 2949 container_manager_linux.go:304] "Creating device plugin manager"
Jan 23 17:58:54.811044 kubelet[2949]: I0123 17:58:54.810992 2949 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:58:54.818044 kubelet[2949]: I0123 17:58:54.818000 2949 kubelet.go:446] "Attempting to sync node with API server"
Jan 23 17:58:54.818044 kubelet[2949]: I0123 17:58:54.818046 2949 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 17:58:54.820699 kubelet[2949]: I0123 17:58:54.818095 2949 kubelet.go:352] "Adding apiserver pod source"
Jan 23 17:58:54.820699 kubelet[2949]: I0123 17:58:54.818116 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 17:58:54.836420 kubelet[2949]: W0123 17:58:54.836330 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-204&limit=500&resourceVersion=0": dial tcp 172.31.24.204:6443: connect: connection refused
Jan 23 17:58:54.836588 kubelet[2949]: E0123 17:58:54.836463 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-204&limit=500&resourceVersion=0\": dial tcp 172.31.24.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 17:58:54.836792 kubelet[2949]: W0123 17:58:54.836728 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.204:6443: connect: connection refused
Jan 23 17:58:54.836943 kubelet[2949]: E0123 17:58:54.836900 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 17:58:54.838854 kubelet[2949]: I0123 17:58:54.838778 2949 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 17:58:54.840552 kubelet[2949]: I0123 17:58:54.840487 2949 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 17:58:54.840904 kubelet[2949]: W0123 17:58:54.840881 2949 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 17:58:54.842692 kubelet[2949]: I0123 17:58:54.842663 2949 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 23 17:58:54.842893 kubelet[2949]: I0123 17:58:54.842875 2949 server.go:1287] "Started kubelet"
Jan 23 17:58:54.845493 kubelet[2949]: I0123 17:58:54.845456 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 17:58:54.855065 kubelet[2949]: E0123 17:58:54.854228 2949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.204:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.204:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-204.188d6df6ac13b518 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-204,UID:ip-172-31-24-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-204,},FirstTimestamp:2026-01-23 17:58:54.842828056 +0000 UTC m=+1.311220435,LastTimestamp:2026-01-23 17:58:54.842828056 +0000 UTC m=+1.311220435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-204,}"
Jan 23 17:58:54.855938 kubelet[2949]: I0123 17:58:54.855862 2949 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 17:58:54.856457 kubelet[2949]: I0123 17:58:54.856414 2949 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 17:58:54.858569 kubelet[2949]: I0123 17:58:54.858490 2949 server.go:479] "Adding debug handlers to kubelet server"
Jan 23 17:58:54.858755 kubelet[2949]: E0123 17:58:54.858726 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-204\" not found"
Jan 23 17:58:54.859360 kubelet[2949]: I0123 17:58:54.859323 2949 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 23 17:58:54.859640 kubelet[2949]: I0123 17:58:54.859619 2949 reconciler.go:26] "Reconciler: start to sync state"
Jan 23 17:58:54.860403 kubelet[2949]: I0123 17:58:54.860308 2949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 17:58:54.860823 kubelet[2949]: I0123 17:58:54.860731 2949 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 17:58:54.861206 kubelet[2949]: I0123 17:58:54.861156 2949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 17:58:54.861725 kubelet[2949]: W0123 17:58:54.861442 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.204:6443: connect: connection refused
Jan 23 17:58:54.862092 kubelet[2949]: E0123 17:58:54.861902 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 17:58:54.864907 kubelet[2949]: E0123 17:58:54.862303 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": dial tcp 172.31.24.204:6443: connect: connection refused" interval="200ms"
Jan 23 17:58:54.865347 kubelet[2949]: I0123 17:58:54.864050 2949 factory.go:221] Registration of the systemd container factory successfully
Jan 23 17:58:54.866532 kubelet[2949]: I0123 17:58:54.866385 2949 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 17:58:54.869422 kubelet[2949]: E0123 17:58:54.868914 2949 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 17:58:54.870272 kubelet[2949]: I0123 17:58:54.870240 2949 factory.go:221] Registration of the containerd container factory successfully
Jan 23 17:58:54.896647 kubelet[2949]: I0123 17:58:54.896535 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 23 17:58:54.906567 kubelet[2949]: I0123 17:58:54.906384 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 23 17:58:54.906567 kubelet[2949]: I0123 17:58:54.906430 2949 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 23 17:58:54.906567 kubelet[2949]: I0123 17:58:54.906466 2949 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 17:58:54.906821 kubelet[2949]: I0123 17:58:54.906493 2949 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 23 17:58:54.906994 kubelet[2949]: E0123 17:58:54.906955 2949 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 17:58:54.909094 kubelet[2949]: I0123 17:58:54.909036 2949 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 17:58:54.909094 kubelet[2949]: I0123 17:58:54.909083 2949 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 17:58:54.909296 kubelet[2949]: I0123 17:58:54.909117 2949 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 17:58:54.913668 kubelet[2949]: W0123 17:58:54.912971 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.204:6443: connect: connection refused
Jan 23 17:58:54.913668 kubelet[2949]: E0123 17:58:54.913072 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.204:6443: connect: connection refused" logger="UnhandledError"
Jan 23 17:58:54.913852 kubelet[2949]: I0123 17:58:54.913760 2949 policy_none.go:49] "None policy: Start"
Jan 23 17:58:54.913852 kubelet[2949]: I0123 17:58:54.913797 2949 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 23 17:58:54.913852 kubelet[2949]: I0123 17:58:54.913820 2949 state_mem.go:35] "Initializing new in-memory state store"
Jan 23 17:58:54.925297 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 17:58:54.945781 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 17:58:54.953588 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 17:58:54.959248 kubelet[2949]: E0123 17:58:54.959198 2949 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-204\" not found"
Jan 23 17:58:54.971383 kubelet[2949]: I0123 17:58:54.971256 2949 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 23 17:58:54.972475 kubelet[2949]: I0123 17:58:54.972136 2949 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 17:58:54.972475 kubelet[2949]: I0123 17:58:54.972173 2949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 17:58:54.974154 kubelet[2949]: I0123 17:58:54.974127 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 17:58:54.977674 kubelet[2949]: E0123 17:58:54.977486 2949 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 17:58:54.980075 kubelet[2949]: E0123 17:58:54.980033 2949 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-204\" not found"
Jan 23 17:58:55.029710 systemd[1]: Created slice kubepods-burstable-podfd018a0d9ff695f9374bd8459f55a39c.slice - libcontainer container kubepods-burstable-podfd018a0d9ff695f9374bd8459f55a39c.slice.
Jan 23 17:58:55.045639 kubelet[2949]: E0123 17:58:55.044182 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204"
Jan 23 17:58:55.053238 systemd[1]: Created slice kubepods-burstable-pod3bfefdbb5b498b73b7b4d036d57e8dc9.slice - libcontainer container kubepods-burstable-pod3bfefdbb5b498b73b7b4d036d57e8dc9.slice.
Jan 23 17:58:55.058623 kubelet[2949]: E0123 17:58:55.058588 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204"
Jan 23 17:58:55.061664 kubelet[2949]: I0123 17:58:55.061614 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:58:55.061919 kubelet[2949]: I0123 17:58:55.061892 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:58:55.062213 kubelet[2949]: I0123 17:58:55.062188 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3bfefdbb5b498b73b7b4d036d57e8dc9-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-204\" (UID: \"3bfefdbb5b498b73b7b4d036d57e8dc9\") " pod="kube-system/kube-scheduler-ip-172-31-24-204"
Jan 23 17:58:55.062396 kubelet[2949]: I0123 17:58:55.062345 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f8846ac24a579e5d6c0333a0237c696-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-204\" (UID: \"3f8846ac24a579e5d6c0333a0237c696\") " pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:58:55.062637 kubelet[2949]: I0123 17:58:55.062594 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:58:55.062865 kubelet[2949]: I0123 17:58:55.062819 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:58:55.063046 kubelet[2949]: I0123 17:58:55.062992 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:58:55.063225 kubelet[2949]: I0123 17:58:55.063172 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f8846ac24a579e5d6c0333a0237c696-ca-certs\") pod \"kube-apiserver-ip-172-31-24-204\" (UID: \"3f8846ac24a579e5d6c0333a0237c696\") " pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:58:55.063395 kubelet[2949]: I0123 17:58:55.063351 2949 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f8846ac24a579e5d6c0333a0237c696-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-204\" (UID: \"3f8846ac24a579e5d6c0333a0237c696\") " pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:58:55.065179 systemd[1]: Created slice kubepods-burstable-pod3f8846ac24a579e5d6c0333a0237c696.slice - libcontainer container kubepods-burstable-pod3f8846ac24a579e5d6c0333a0237c696.slice.
Jan 23 17:58:55.066141 kubelet[2949]: E0123 17:58:55.066072 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": dial tcp 172.31.24.204:6443: connect: connection refused" interval="400ms"
Jan 23 17:58:55.068978 kubelet[2949]: E0123 17:58:55.068944 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204"
Jan 23 17:58:55.075862 kubelet[2949]: I0123 17:58:55.075729 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-204"
Jan 23 17:58:55.077236 kubelet[2949]: E0123 17:58:55.077115 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.204:6443/api/v1/nodes\": dial tcp 172.31.24.204:6443: connect: connection refused" node="ip-172-31-24-204"
Jan 23 17:58:55.280285 kubelet[2949]: I0123 17:58:55.279806 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-204"
Jan 23 17:58:55.280407 kubelet[2949]: E0123 17:58:55.280286 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.204:6443/api/v1/nodes\": dial tcp 172.31.24.204:6443: connect: connection refused" node="ip-172-31-24-204"
Jan 23 17:58:55.347182 containerd[1996]: time="2026-01-23T17:58:55.346335087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-204,Uid:fd018a0d9ff695f9374bd8459f55a39c,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:55.361456 containerd[1996]: time="2026-01-23T17:58:55.361193319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-204,Uid:3bfefdbb5b498b73b7b4d036d57e8dc9,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:55.372670 containerd[1996]: time="2026-01-23T17:58:55.372573999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-204,Uid:3f8846ac24a579e5d6c0333a0237c696,Namespace:kube-system,Attempt:0,}"
Jan 23 17:58:55.467457 kubelet[2949]: E0123 17:58:55.467394 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": dial tcp 172.31.24.204:6443: connect: connection refused" interval="800ms"
Jan 23 17:58:55.472111 containerd[1996]: time="2026-01-23T17:58:55.471991576Z" level=info msg="connecting to shim 6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7" address="unix:///run/containerd/s/5c26172bcec5046831c619207faee62226ac0b21ac35f4f99a3d6e9e00d567dc" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:55.492463 containerd[1996]: time="2026-01-23T17:58:55.491021416Z" level=info msg="connecting to shim 11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1" address="unix:///run/containerd/s/4fc8089bc03ca8188834c103e21cf41881060119824daf6d74831f42bacb9e89" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:58:55.557191 containerd[1996]: time="2026-01-23T17:58:55.556794184Z" level=info msg="connecting to shim 968221c2cc316ac5bb723fcd1b56fe897741628c3c41488d6149fb5ba875b3ac"
address="unix:///run/containerd/s/8d420e145f4230f256927522358ced3fe2ee58ac8312aa2c73fbea2c4eea866c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:58:55.570190 systemd[1]: Started cri-containerd-6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7.scope - libcontainer container 6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7. Jan 23 17:58:55.609805 systemd[1]: Started cri-containerd-11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1.scope - libcontainer container 11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1. Jan 23 17:58:55.644822 systemd[1]: Started cri-containerd-968221c2cc316ac5bb723fcd1b56fe897741628c3c41488d6149fb5ba875b3ac.scope - libcontainer container 968221c2cc316ac5bb723fcd1b56fe897741628c3c41488d6149fb5ba875b3ac. Jan 23 17:58:55.692312 kubelet[2949]: I0123 17:58:55.691956 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-204" Jan 23 17:58:55.694342 kubelet[2949]: E0123 17:58:55.694258 2949 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.204:6443/api/v1/nodes\": dial tcp 172.31.24.204:6443: connect: connection refused" node="ip-172-31-24-204" Jan 23 17:58:55.712734 kubelet[2949]: W0123 17:58:55.712460 2949 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.204:6443: connect: connection refused Jan 23 17:58:55.714454 kubelet[2949]: E0123 17:58:55.714337 2949 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.204:6443: connect: connection refused" logger="UnhandledError" Jan 23 17:58:55.717177 containerd[1996]: time="2026-01-23T17:58:55.716924117Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-204,Uid:fd018a0d9ff695f9374bd8459f55a39c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7\"" Jan 23 17:58:55.746635 containerd[1996]: time="2026-01-23T17:58:55.740448365Z" level=info msg="CreateContainer within sandbox \"6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 17:58:55.770821 containerd[1996]: time="2026-01-23T17:58:55.770744405Z" level=info msg="Container 3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:55.777532 containerd[1996]: time="2026-01-23T17:58:55.777426101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-204,Uid:3bfefdbb5b498b73b7b4d036d57e8dc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1\"" Jan 23 17:58:55.785210 containerd[1996]: time="2026-01-23T17:58:55.785160557Z" level=info msg="CreateContainer within sandbox \"11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 17:58:55.792898 containerd[1996]: time="2026-01-23T17:58:55.792826121Z" level=info msg="CreateContainer within sandbox \"6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf\"" Jan 23 17:58:55.795588 containerd[1996]: time="2026-01-23T17:58:55.795542789Z" level=info msg="StartContainer for \"3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf\"" Jan 23 17:58:55.801863 containerd[1996]: time="2026-01-23T17:58:55.801780617Z" level=info msg="Container 
a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:55.803859 containerd[1996]: time="2026-01-23T17:58:55.803063261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-204,Uid:3f8846ac24a579e5d6c0333a0237c696,Namespace:kube-system,Attempt:0,} returns sandbox id \"968221c2cc316ac5bb723fcd1b56fe897741628c3c41488d6149fb5ba875b3ac\"" Jan 23 17:58:55.804767 containerd[1996]: time="2026-01-23T17:58:55.803181989Z" level=info msg="connecting to shim 3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf" address="unix:///run/containerd/s/5c26172bcec5046831c619207faee62226ac0b21ac35f4f99a3d6e9e00d567dc" protocol=ttrpc version=3 Jan 23 17:58:55.814469 containerd[1996]: time="2026-01-23T17:58:55.814254173Z" level=info msg="CreateContainer within sandbox \"968221c2cc316ac5bb723fcd1b56fe897741628c3c41488d6149fb5ba875b3ac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 17:58:55.821999 containerd[1996]: time="2026-01-23T17:58:55.821918693Z" level=info msg="CreateContainer within sandbox \"11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387\"" Jan 23 17:58:55.823091 containerd[1996]: time="2026-01-23T17:58:55.823009397Z" level=info msg="StartContainer for \"a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387\"" Jan 23 17:58:55.827050 containerd[1996]: time="2026-01-23T17:58:55.826997405Z" level=info msg="connecting to shim a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387" address="unix:///run/containerd/s/4fc8089bc03ca8188834c103e21cf41881060119824daf6d74831f42bacb9e89" protocol=ttrpc version=3 Jan 23 17:58:55.835061 containerd[1996]: time="2026-01-23T17:58:55.834929741Z" level=info msg="Container 
d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:58:55.858092 containerd[1996]: time="2026-01-23T17:58:55.857757389Z" level=info msg="CreateContainer within sandbox \"968221c2cc316ac5bb723fcd1b56fe897741628c3c41488d6149fb5ba875b3ac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3\"" Jan 23 17:58:55.859052 containerd[1996]: time="2026-01-23T17:58:55.859006877Z" level=info msg="StartContainer for \"d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3\"" Jan 23 17:58:55.860259 systemd[1]: Started cri-containerd-3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf.scope - libcontainer container 3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf. Jan 23 17:58:55.863119 containerd[1996]: time="2026-01-23T17:58:55.862146497Z" level=info msg="connecting to shim d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3" address="unix:///run/containerd/s/8d420e145f4230f256927522358ced3fe2ee58ac8312aa2c73fbea2c4eea866c" protocol=ttrpc version=3 Jan 23 17:58:55.897997 systemd[1]: Started cri-containerd-a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387.scope - libcontainer container a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387. Jan 23 17:58:55.940389 systemd[1]: Started cri-containerd-d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3.scope - libcontainer container d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3. 
Jan 23 17:58:56.024695 containerd[1996]: time="2026-01-23T17:58:56.024641318Z" level=info msg="StartContainer for \"3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf\" returns successfully" Jan 23 17:58:56.099466 containerd[1996]: time="2026-01-23T17:58:56.099121527Z" level=info msg="StartContainer for \"d9c49502e0668c5a2098857929410154ffc7350397508f3acf91960ae31d39b3\" returns successfully" Jan 23 17:58:56.103430 containerd[1996]: time="2026-01-23T17:58:56.103337247Z" level=info msg="StartContainer for \"a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387\" returns successfully" Jan 23 17:58:56.268695 kubelet[2949]: E0123 17:58:56.268622 2949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": dial tcp 172.31.24.204:6443: connect: connection refused" interval="1.6s" Jan 23 17:58:56.500429 kubelet[2949]: I0123 17:58:56.500380 2949 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-204" Jan 23 17:58:56.965477 kubelet[2949]: E0123 17:58:56.965430 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:56.976096 kubelet[2949]: E0123 17:58:56.976046 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:56.981169 kubelet[2949]: E0123 17:58:56.981116 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:57.984353 kubelet[2949]: E0123 17:58:57.984280 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" 
node="ip-172-31-24-204" Jan 23 17:58:57.986370 kubelet[2949]: E0123 17:58:57.986297 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:57.988202 kubelet[2949]: E0123 17:58:57.988082 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:58.986218 kubelet[2949]: E0123 17:58:58.986160 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:58.988057 kubelet[2949]: E0123 17:58:58.987997 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:58.988706 kubelet[2949]: E0123 17:58:58.988657 2949 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:58:59.827262 kubelet[2949]: I0123 17:58:59.827203 2949 apiserver.go:52] "Watching apiserver" Jan 23 17:58:59.860673 kubelet[2949]: I0123 17:58:59.860613 2949 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 17:58:59.886221 kubelet[2949]: E0123 17:58:59.886154 2949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-204\" not found" node="ip-172-31-24-204" Jan 23 17:59:00.047668 kubelet[2949]: I0123 17:59:00.047574 2949 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-204" Jan 23 17:59:00.047668 kubelet[2949]: E0123 17:59:00.047662 2949 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-204\": node 
\"ip-172-31-24-204\" not found" Jan 23 17:59:00.060443 kubelet[2949]: I0123 17:59:00.060379 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-204" Jan 23 17:59:00.181122 kubelet[2949]: E0123 17:59:00.180954 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-204" Jan 23 17:59:00.181122 kubelet[2949]: I0123 17:59:00.181017 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-204" Jan 23 17:59:00.192337 kubelet[2949]: E0123 17:59:00.192272 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-204" Jan 23 17:59:00.192337 kubelet[2949]: I0123 17:59:00.192329 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-204" Jan 23 17:59:00.200998 kubelet[2949]: E0123 17:59:00.200927 2949 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-204" Jan 23 17:59:02.002357 systemd[1]: Reload requested from client PID 3223 ('systemctl') (unit session-7.scope)... Jan 23 17:59:02.002399 systemd[1]: Reloading... Jan 23 17:59:02.326720 zram_generator::config[3270]: No configuration found. Jan 23 17:59:02.800652 kubelet[2949]: I0123 17:59:02.798991 2949 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-204" Jan 23 17:59:02.900956 systemd[1]: Reloading finished in 897 ms. Jan 23 17:59:02.977840 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 17:59:02.997357 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 17:59:02.997988 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:59:02.998103 systemd[1]: kubelet.service: Consumed 2.058s CPU time, 126M memory peak. Jan 23 17:59:03.006755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 17:59:03.431819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 17:59:03.449364 (kubelet)[3327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 17:59:03.586536 kubelet[3327]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 17:59:03.586536 kubelet[3327]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 17:59:03.586536 kubelet[3327]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 17:59:03.586536 kubelet[3327]: I0123 17:59:03.585794 3327 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 17:59:03.614272 kubelet[3327]: I0123 17:59:03.614191 3327 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 17:59:03.614272 kubelet[3327]: I0123 17:59:03.614250 3327 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 17:59:03.617007 kubelet[3327]: I0123 17:59:03.616937 3327 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 17:59:03.619799 kubelet[3327]: I0123 17:59:03.619740 3327 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 17:59:03.629169 kubelet[3327]: I0123 17:59:03.629099 3327 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 17:59:03.653568 kubelet[3327]: I0123 17:59:03.651535 3327 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 17:59:03.659326 kubelet[3327]: I0123 17:59:03.659192 3327 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 17:59:03.661878 kubelet[3327]: I0123 17:59:03.659900 3327 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 17:59:03.661878 kubelet[3327]: I0123 17:59:03.659979 3327 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 17:59:03.661878 kubelet[3327]: I0123 17:59:03.660310 3327 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 23 17:59:03.661878 kubelet[3327]: I0123 17:59:03.660334 3327 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 17:59:03.662296 kubelet[3327]: I0123 17:59:03.660427 3327 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:59:03.662296 kubelet[3327]: I0123 17:59:03.660810 3327 kubelet.go:446] "Attempting to sync node with API server" Jan 23 17:59:03.662296 kubelet[3327]: I0123 17:59:03.661613 3327 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 17:59:03.662296 kubelet[3327]: I0123 17:59:03.661689 3327 kubelet.go:352] "Adding apiserver pod source" Jan 23 17:59:03.662296 kubelet[3327]: I0123 17:59:03.661714 3327 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 17:59:03.666043 kubelet[3327]: I0123 17:59:03.665987 3327 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 17:59:03.667762 kubelet[3327]: I0123 17:59:03.666786 3327 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 17:59:03.670626 kubelet[3327]: I0123 17:59:03.669650 3327 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 17:59:03.670626 kubelet[3327]: I0123 17:59:03.669733 3327 server.go:1287] "Started kubelet" Jan 23 17:59:03.678611 kubelet[3327]: I0123 17:59:03.678485 3327 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 17:59:03.699581 kubelet[3327]: I0123 17:59:03.698280 3327 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 17:59:03.704018 kubelet[3327]: I0123 17:59:03.703953 3327 server.go:479] "Adding debug handlers to kubelet server" Jan 23 17:59:03.708286 kubelet[3327]: I0123 17:59:03.706893 3327 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 17:59:03.708286 kubelet[3327]: I0123 17:59:03.707305 3327 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 17:59:03.708617 kubelet[3327]: I0123 17:59:03.708458 3327 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 17:59:03.717542 kubelet[3327]: I0123 17:59:03.716118 3327 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 17:59:03.718024 kubelet[3327]: E0123 17:59:03.717960 3327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-204\" not found" Jan 23 17:59:03.720532 kubelet[3327]: I0123 17:59:03.719276 3327 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 17:59:03.720532 kubelet[3327]: I0123 17:59:03.719562 3327 reconciler.go:26] "Reconciler: start to sync state" Jan 23 17:59:03.766118 kubelet[3327]: I0123 17:59:03.766062 3327 factory.go:221] Registration of the systemd container factory successfully Jan 23 17:59:03.767766 kubelet[3327]: I0123 17:59:03.767701 3327 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 17:59:03.783419 kubelet[3327]: I0123 17:59:03.783326 3327 factory.go:221] Registration of the containerd container factory successfully Jan 23 17:59:03.788062 kubelet[3327]: E0123 17:59:03.787996 3327 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 17:59:03.823087 kubelet[3327]: E0123 17:59:03.821808 3327 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-204\" not found" Jan 23 17:59:03.827135 kubelet[3327]: I0123 17:59:03.827027 3327 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 23 17:59:03.845788 kubelet[3327]: I0123 17:59:03.844898 3327 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 17:59:03.846004 kubelet[3327]: I0123 17:59:03.845976 3327 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 17:59:03.846137 kubelet[3327]: I0123 17:59:03.846116 3327 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 17:59:03.846620 kubelet[3327]: I0123 17:59:03.846589 3327 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 17:59:03.850676 kubelet[3327]: E0123 17:59:03.849315 3327 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 17:59:03.951973 kubelet[3327]: E0123 17:59:03.951583 3327 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 17:59:04.067303 kubelet[3327]: I0123 17:59:04.067244 3327 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 17:59:04.067303 kubelet[3327]: I0123 17:59:04.067288 3327 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 17:59:04.067490 kubelet[3327]: I0123 17:59:04.067325 3327 state_mem.go:36] "Initialized new in-memory state store" Jan 23 17:59:04.067916 kubelet[3327]: I0123 17:59:04.067755 3327 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 17:59:04.067916 kubelet[3327]: I0123 17:59:04.067793 3327 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 17:59:04.068720 kubelet[3327]: I0123 17:59:04.067833 3327 policy_none.go:49] "None policy: Start" Jan 23 17:59:04.068720 kubelet[3327]: I0123 17:59:04.068652 3327 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 17:59:04.068720 kubelet[3327]: I0123 17:59:04.068691 3327 state_mem.go:35] "Initializing new in-memory state store" Jan 23 17:59:04.069042 
kubelet[3327]: I0123 17:59:04.068965 3327 state_mem.go:75] "Updated machine memory state" Jan 23 17:59:04.107990 kubelet[3327]: I0123 17:59:04.107928 3327 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 17:59:04.108301 kubelet[3327]: I0123 17:59:04.108259 3327 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 17:59:04.108420 kubelet[3327]: I0123 17:59:04.108299 3327 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 17:59:04.115851 kubelet[3327]: E0123 17:59:04.115785 3327 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 17:59:04.116000 kubelet[3327]: I0123 17:59:04.115871 3327 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 17:59:04.163193 kubelet[3327]: I0123 17:59:04.161934 3327 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-204" Jan 23 17:59:04.175982 kubelet[3327]: I0123 17:59:04.175902 3327 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-204" Jan 23 17:59:04.180477 kubelet[3327]: I0123 17:59:04.180322 3327 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-204" Jan 23 17:59:04.231131 kubelet[3327]: E0123 17:59:04.231078 3327 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-204\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-204" Jan 23 17:59:04.231860 kubelet[3327]: I0123 17:59:04.231813 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f8846ac24a579e5d6c0333a0237c696-ca-certs\") pod \"kube-apiserver-ip-172-31-24-204\" (UID: 
\"3f8846ac24a579e5d6c0333a0237c696\") " pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:59:04.232078 kubelet[3327]: I0123 17:59:04.232032 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f8846ac24a579e5d6c0333a0237c696-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-204\" (UID: \"3f8846ac24a579e5d6c0333a0237c696\") " pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:59:04.232349 kubelet[3327]: I0123 17:59:04.232278 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f8846ac24a579e5d6c0333a0237c696-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-204\" (UID: \"3f8846ac24a579e5d6c0333a0237c696\") " pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:59:04.235562 kubelet[3327]: I0123 17:59:04.233675 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.235562 kubelet[3327]: I0123 17:59:04.233782 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3bfefdbb5b498b73b7b4d036d57e8dc9-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-204\" (UID: \"3bfefdbb5b498b73b7b4d036d57e8dc9\") " pod="kube-system/kube-scheduler-ip-172-31-24-204"
Jan 23 17:59:04.235562 kubelet[3327]: I0123 17:59:04.234021 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.235562 kubelet[3327]: I0123 17:59:04.234108 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.235562 kubelet[3327]: I0123 17:59:04.234180 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.235955 kubelet[3327]: I0123 17:59:04.234253 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd018a0d9ff695f9374bd8459f55a39c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-204\" (UID: \"fd018a0d9ff695f9374bd8459f55a39c\") " pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.288696 kubelet[3327]: I0123 17:59:04.288630 3327 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-204"
Jan 23 17:59:04.330538 kubelet[3327]: I0123 17:59:04.330445 3327 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-204"
Jan 23 17:59:04.330708 kubelet[3327]: I0123 17:59:04.330611 3327 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-204"
Jan 23 17:59:04.665140 kubelet[3327]: I0123 17:59:04.664984 3327 apiserver.go:52] "Watching apiserver"
Jan 23 17:59:04.719698 kubelet[3327]: I0123 17:59:04.719613 3327 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 23 17:59:04.820622 kubelet[3327]: I0123 17:59:04.820462 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-204" podStartSLOduration=0.820439018 podStartE2EDuration="820.439018ms" podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:04.796149722 +0000 UTC m=+1.333536896" watchObservedRunningTime="2026-01-23 17:59:04.820439018 +0000 UTC m=+1.357826192"
Jan 23 17:59:04.846834 kubelet[3327]: I0123 17:59:04.846738 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-204" podStartSLOduration=2.846718334 podStartE2EDuration="2.846718334s" podCreationTimestamp="2026-01-23 17:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:04.84642743 +0000 UTC m=+1.383814616" watchObservedRunningTime="2026-01-23 17:59:04.846718334 +0000 UTC m=+1.384105508"
Jan 23 17:59:04.847060 kubelet[3327]: I0123 17:59:04.846900 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-204" podStartSLOduration=0.846889862 podStartE2EDuration="846.889862ms" podCreationTimestamp="2026-01-23 17:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:04.824592194 +0000 UTC m=+1.361979404" watchObservedRunningTime="2026-01-23 17:59:04.846889862 +0000 UTC m=+1.384277060"
Jan 23 17:59:04.954580 kubelet[3327]: I0123 17:59:04.954403 3327 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:59:04.957061 kubelet[3327]: I0123 17:59:04.957005 3327 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.983069 kubelet[3327]: E0123 17:59:04.982993 3327 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-204\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-204"
Jan 23 17:59:04.986460 kubelet[3327]: E0123 17:59:04.986330 3327 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-204\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-204"
Jan 23 17:59:05.518534 update_engine[1981]: I20260123 17:59:05.516559 1981 update_attempter.cc:509] Updating boot flags...
Jan 23 17:59:08.545120 kubelet[3327]: I0123 17:59:08.545061 3327 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 23 17:59:08.546362 containerd[1996]: time="2026-01-23T17:59:08.546106720Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 23 17:59:08.547934 kubelet[3327]: I0123 17:59:08.546763 3327 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 23 17:59:09.213698 systemd[1]: Created slice kubepods-besteffort-podf9f42e3a_004f_494d_8104_4f4b5c443683.slice - libcontainer container kubepods-besteffort-podf9f42e3a_004f_494d_8104_4f4b5c443683.slice.
Jan 23 17:59:09.273373 kubelet[3327]: I0123 17:59:09.273321 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9f42e3a-004f-494d-8104-4f4b5c443683-xtables-lock\") pod \"kube-proxy-smppt\" (UID: \"f9f42e3a-004f-494d-8104-4f4b5c443683\") " pod="kube-system/kube-proxy-smppt"
Jan 23 17:59:09.273722 kubelet[3327]: I0123 17:59:09.273675 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9f42e3a-004f-494d-8104-4f4b5c443683-lib-modules\") pod \"kube-proxy-smppt\" (UID: \"f9f42e3a-004f-494d-8104-4f4b5c443683\") " pod="kube-system/kube-proxy-smppt"
Jan 23 17:59:09.274108 kubelet[3327]: I0123 17:59:09.273919 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5nqd\" (UniqueName: \"kubernetes.io/projected/f9f42e3a-004f-494d-8104-4f4b5c443683-kube-api-access-f5nqd\") pod \"kube-proxy-smppt\" (UID: \"f9f42e3a-004f-494d-8104-4f4b5c443683\") " pod="kube-system/kube-proxy-smppt"
Jan 23 17:59:09.274372 kubelet[3327]: I0123 17:59:09.274317 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9f42e3a-004f-494d-8104-4f4b5c443683-kube-proxy\") pod \"kube-proxy-smppt\" (UID: \"f9f42e3a-004f-494d-8104-4f4b5c443683\") " pod="kube-system/kube-proxy-smppt"
Jan 23 17:59:09.393569 kubelet[3327]: E0123 17:59:09.393286 3327 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 23 17:59:09.393569 kubelet[3327]: E0123 17:59:09.393363 3327 projected.go:194] Error preparing data for projected volume kube-api-access-f5nqd for pod kube-system/kube-proxy-smppt: configmap "kube-root-ca.crt" not found
Jan 23 17:59:09.393569 kubelet[3327]: E0123 17:59:09.393479 3327 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f9f42e3a-004f-494d-8104-4f4b5c443683-kube-api-access-f5nqd podName:f9f42e3a-004f-494d-8104-4f4b5c443683 nodeName:}" failed. No retries permitted until 2026-01-23 17:59:09.893443489 +0000 UTC m=+6.430830651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f5nqd" (UniqueName: "kubernetes.io/projected/f9f42e3a-004f-494d-8104-4f4b5c443683-kube-api-access-f5nqd") pod "kube-proxy-smppt" (UID: "f9f42e3a-004f-494d-8104-4f4b5c443683") : configmap "kube-root-ca.crt" not found
Jan 23 17:59:09.704293 systemd[1]: Created slice kubepods-besteffort-podce7b547a_93ef_4717_9473_29c7037baa32.slice - libcontainer container kubepods-besteffort-podce7b547a_93ef_4717_9473_29c7037baa32.slice.
Jan 23 17:59:09.779577 kubelet[3327]: I0123 17:59:09.779479 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwtt\" (UniqueName: \"kubernetes.io/projected/ce7b547a-93ef-4717-9473-29c7037baa32-kube-api-access-xmwtt\") pod \"tigera-operator-7dcd859c48-gv5kz\" (UID: \"ce7b547a-93ef-4717-9473-29c7037baa32\") " pod="tigera-operator/tigera-operator-7dcd859c48-gv5kz"
Jan 23 17:59:09.780352 kubelet[3327]: I0123 17:59:09.780244 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce7b547a-93ef-4717-9473-29c7037baa32-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gv5kz\" (UID: \"ce7b547a-93ef-4717-9473-29c7037baa32\") " pod="tigera-operator/tigera-operator-7dcd859c48-gv5kz"
Jan 23 17:59:10.018740 containerd[1996]: time="2026-01-23T17:59:10.018693304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gv5kz,Uid:ce7b547a-93ef-4717-9473-29c7037baa32,Namespace:tigera-operator,Attempt:0,}"
Jan 23 17:59:10.065322 containerd[1996]: time="2026-01-23T17:59:10.065245900Z" level=info msg="connecting to shim 4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827" address="unix:///run/containerd/s/01f8bb1d109acca590c471b1e2af41551e84b764814921a207dd1b8e2bed6865" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:59:10.114837 systemd[1]: Started cri-containerd-4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827.scope - libcontainer container 4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827.
Jan 23 17:59:10.127359 containerd[1996]: time="2026-01-23T17:59:10.127310440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smppt,Uid:f9f42e3a-004f-494d-8104-4f4b5c443683,Namespace:kube-system,Attempt:0,}"
Jan 23 17:59:10.172124 containerd[1996]: time="2026-01-23T17:59:10.172050677Z" level=info msg="connecting to shim ea4b4f143354497a0075facf56c8ac92a3fd5a9ac91cf93a48437cd949f5abe9" address="unix:///run/containerd/s/7b7644e45fddc64ba5fa6377cf1a60614d79b89d306466cc440bd501f00a980c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:59:10.211071 containerd[1996]: time="2026-01-23T17:59:10.210986405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gv5kz,Uid:ce7b547a-93ef-4717-9473-29c7037baa32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827\""
Jan 23 17:59:10.214729 containerd[1996]: time="2026-01-23T17:59:10.214664993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 17:59:10.246887 systemd[1]: Started cri-containerd-ea4b4f143354497a0075facf56c8ac92a3fd5a9ac91cf93a48437cd949f5abe9.scope - libcontainer container ea4b4f143354497a0075facf56c8ac92a3fd5a9ac91cf93a48437cd949f5abe9.
Jan 23 17:59:10.297190 containerd[1996]: time="2026-01-23T17:59:10.296304737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-smppt,Uid:f9f42e3a-004f-494d-8104-4f4b5c443683,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4b4f143354497a0075facf56c8ac92a3fd5a9ac91cf93a48437cd949f5abe9\""
Jan 23 17:59:10.303913 containerd[1996]: time="2026-01-23T17:59:10.303856589Z" level=info msg="CreateContainer within sandbox \"ea4b4f143354497a0075facf56c8ac92a3fd5a9ac91cf93a48437cd949f5abe9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 17:59:10.320529 containerd[1996]: time="2026-01-23T17:59:10.320323325Z" level=info msg="Container f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:10.338661 containerd[1996]: time="2026-01-23T17:59:10.338585777Z" level=info msg="CreateContainer within sandbox \"ea4b4f143354497a0075facf56c8ac92a3fd5a9ac91cf93a48437cd949f5abe9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd\""
Jan 23 17:59:10.344087 containerd[1996]: time="2026-01-23T17:59:10.342522413Z" level=info msg="StartContainer for \"f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd\""
Jan 23 17:59:10.348152 containerd[1996]: time="2026-01-23T17:59:10.348085853Z" level=info msg="connecting to shim f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd" address="unix:///run/containerd/s/7b7644e45fddc64ba5fa6377cf1a60614d79b89d306466cc440bd501f00a980c" protocol=ttrpc version=3
Jan 23 17:59:10.388998 systemd[1]: Started cri-containerd-f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd.scope - libcontainer container f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd.
Jan 23 17:59:10.520544 containerd[1996]: time="2026-01-23T17:59:10.520349166Z" level=info msg="StartContainer for \"f1ff5ff972da48a73a3adea039bc369511b505df9ae7a436dc62055f5ebecddd\" returns successfully"
Jan 23 17:59:11.267124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886514009.mount: Deactivated successfully.
Jan 23 17:59:11.365288 kubelet[3327]: I0123 17:59:11.365129 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-smppt" podStartSLOduration=2.36510303 podStartE2EDuration="2.36510303s" podCreationTimestamp="2026-01-23 17:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 17:59:11.007008305 +0000 UTC m=+7.544395563" watchObservedRunningTime="2026-01-23 17:59:11.36510303 +0000 UTC m=+7.902490216"
Jan 23 17:59:12.560866 containerd[1996]: time="2026-01-23T17:59:12.560782664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:12.563278 containerd[1996]: time="2026-01-23T17:59:12.562846556Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 23 17:59:12.564414 containerd[1996]: time="2026-01-23T17:59:12.564344972Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:12.570484 containerd[1996]: time="2026-01-23T17:59:12.570389564Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 17:59:12.574548 containerd[1996]: time="2026-01-23T17:59:12.574445060Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.359716983s"
Jan 23 17:59:12.574800 containerd[1996]: time="2026-01-23T17:59:12.574761068Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 23 17:59:12.583098 containerd[1996]: time="2026-01-23T17:59:12.582904089Z" level=info msg="CreateContainer within sandbox \"4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 17:59:12.600090 containerd[1996]: time="2026-01-23T17:59:12.599833125Z" level=info msg="Container 85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8: CDI devices from CRI Config.CDIDevices: []"
Jan 23 17:59:12.619793 containerd[1996]: time="2026-01-23T17:59:12.619595157Z" level=info msg="CreateContainer within sandbox \"4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\""
Jan 23 17:59:12.621020 containerd[1996]: time="2026-01-23T17:59:12.620920005Z" level=info msg="StartContainer for \"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\""
Jan 23 17:59:12.624295 containerd[1996]: time="2026-01-23T17:59:12.623981229Z" level=info msg="connecting to shim 85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8" address="unix:///run/containerd/s/01f8bb1d109acca590c471b1e2af41551e84b764814921a207dd1b8e2bed6865" protocol=ttrpc version=3
Jan 23 17:59:12.671834 systemd[1]: Started cri-containerd-85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8.scope - libcontainer container 85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8.
Jan 23 17:59:12.752161 containerd[1996]: time="2026-01-23T17:59:12.752095353Z" level=info msg="StartContainer for \"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\" returns successfully"
Jan 23 17:59:19.718048 sudo[2364]: pam_unix(sudo:session): session closed for user root
Jan 23 17:59:19.798765 sshd[2363]: Connection closed by 68.220.241.50 port 58118
Jan 23 17:59:19.801421 sshd-session[2360]: pam_unix(sshd:session): session closed for user core
Jan 23 17:59:19.810382 systemd[1]: sshd@6-172.31.24.204:22-68.220.241.50:58118.service: Deactivated successfully.
Jan 23 17:59:19.820994 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 17:59:19.822082 systemd[1]: session-7.scope: Consumed 10.610s CPU time, 222.6M memory peak.
Jan 23 17:59:19.827233 systemd-logind[1978]: Session 7 logged out. Waiting for processes to exit.
Jan 23 17:59:19.833490 systemd-logind[1978]: Removed session 7.
Jan 23 17:59:37.693536 kubelet[3327]: I0123 17:59:37.693206 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gv5kz" podStartSLOduration=26.328592262 podStartE2EDuration="28.693182997s" podCreationTimestamp="2026-01-23 17:59:09 +0000 UTC" firstStartedPulling="2026-01-23 17:59:10.214014377 +0000 UTC m=+6.751401551" lastFinishedPulling="2026-01-23 17:59:12.578605112 +0000 UTC m=+9.115992286" observedRunningTime="2026-01-23 17:59:13.026648515 +0000 UTC m=+9.564035761" watchObservedRunningTime="2026-01-23 17:59:37.693182997 +0000 UTC m=+34.230570159"
Jan 23 17:59:37.718726 systemd[1]: Created slice kubepods-besteffort-podf9169b38_cf37_4b06_9e75_a4d91b502f72.slice - libcontainer container kubepods-besteffort-podf9169b38_cf37_4b06_9e75_a4d91b502f72.slice.
Jan 23 17:59:37.784027 kubelet[3327]: I0123 17:59:37.783940 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9169b38-cf37-4b06-9e75-a4d91b502f72-tigera-ca-bundle\") pod \"calico-typha-7f6d487bbc-lqkd7\" (UID: \"f9169b38-cf37-4b06-9e75-a4d91b502f72\") " pod="calico-system/calico-typha-7f6d487bbc-lqkd7"
Jan 23 17:59:37.784198 kubelet[3327]: I0123 17:59:37.784171 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9169b38-cf37-4b06-9e75-a4d91b502f72-typha-certs\") pod \"calico-typha-7f6d487bbc-lqkd7\" (UID: \"f9169b38-cf37-4b06-9e75-a4d91b502f72\") " pod="calico-system/calico-typha-7f6d487bbc-lqkd7"
Jan 23 17:59:37.784944 kubelet[3327]: I0123 17:59:37.784300 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2s6\" (UniqueName: \"kubernetes.io/projected/f9169b38-cf37-4b06-9e75-a4d91b502f72-kube-api-access-ps2s6\") pod \"calico-typha-7f6d487bbc-lqkd7\" (UID: \"f9169b38-cf37-4b06-9e75-a4d91b502f72\") " pod="calico-system/calico-typha-7f6d487bbc-lqkd7"
Jan 23 17:59:37.929194 systemd[1]: Created slice kubepods-besteffort-podfdb70a8e_b9fa_42bb_a6f7_b8f29b6ead2e.slice - libcontainer container kubepods-besteffort-podfdb70a8e_b9fa_42bb_a6f7_b8f29b6ead2e.slice.
Jan 23 17:59:37.987441 kubelet[3327]: I0123 17:59:37.986961 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-lib-modules\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.987441 kubelet[3327]: I0123 17:59:37.987034 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-cni-bin-dir\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.987441 kubelet[3327]: I0123 17:59:37.987081 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-flexvol-driver-host\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.987441 kubelet[3327]: I0123 17:59:37.987118 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-policysync\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.987441 kubelet[3327]: I0123 17:59:37.987154 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-tigera-ca-bundle\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.988282 kubelet[3327]: I0123 17:59:37.988077 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-node-certs\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.988461 kubelet[3327]: I0123 17:59:37.988406 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6jwz\" (UniqueName: \"kubernetes.io/projected/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-kube-api-access-l6jwz\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.988762 kubelet[3327]: I0123 17:59:37.988680 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-var-lib-calico\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.988874 kubelet[3327]: I0123 17:59:37.988788 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-xtables-lock\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.988975 kubelet[3327]: I0123 17:59:37.988873 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-cni-log-dir\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.989147 kubelet[3327]: I0123 17:59:37.989053 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-var-run-calico\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:37.989490 kubelet[3327]: I0123 17:59:37.989208 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e-cni-net-dir\") pod \"calico-node-zv2ff\" (UID: \"fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e\") " pod="calico-system/calico-node-zv2ff"
Jan 23 17:59:38.035997 kubelet[3327]: E0123 17:59:38.035768 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66"
Jan 23 17:59:38.044546 containerd[1996]: time="2026-01-23T17:59:38.044031811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f6d487bbc-lqkd7,Uid:f9169b38-cf37-4b06-9e75-a4d91b502f72,Namespace:calico-system,Attempt:0,}"
Jan 23 17:59:38.091294 kubelet[3327]: I0123 17:59:38.090397 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbhk\" (UniqueName: \"kubernetes.io/projected/bd861cd6-0ac7-4fc8-b917-14516a6e2c66-kube-api-access-mhbhk\") pod \"csi-node-driver-wzd8d\" (UID: \"bd861cd6-0ac7-4fc8-b917-14516a6e2c66\") " pod="calico-system/csi-node-driver-wzd8d"
Jan 23 17:59:38.091294 kubelet[3327]: I0123 17:59:38.090488 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bd861cd6-0ac7-4fc8-b917-14516a6e2c66-socket-dir\") pod \"csi-node-driver-wzd8d\" (UID: \"bd861cd6-0ac7-4fc8-b917-14516a6e2c66\") " pod="calico-system/csi-node-driver-wzd8d"
Jan 23 17:59:38.091294 kubelet[3327]: I0123 17:59:38.091123 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bd861cd6-0ac7-4fc8-b917-14516a6e2c66-registration-dir\") pod \"csi-node-driver-wzd8d\" (UID: \"bd861cd6-0ac7-4fc8-b917-14516a6e2c66\") " pod="calico-system/csi-node-driver-wzd8d"
Jan 23 17:59:38.094452 kubelet[3327]: I0123 17:59:38.091601 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bd861cd6-0ac7-4fc8-b917-14516a6e2c66-varrun\") pod \"csi-node-driver-wzd8d\" (UID: \"bd861cd6-0ac7-4fc8-b917-14516a6e2c66\") " pod="calico-system/csi-node-driver-wzd8d"
Jan 23 17:59:38.094452 kubelet[3327]: I0123 17:59:38.091895 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd861cd6-0ac7-4fc8-b917-14516a6e2c66-kubelet-dir\") pod \"csi-node-driver-wzd8d\" (UID: \"bd861cd6-0ac7-4fc8-b917-14516a6e2c66\") " pod="calico-system/csi-node-driver-wzd8d"
Jan 23 17:59:38.099751 kubelet[3327]: E0123 17:59:38.099705 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.109679 kubelet[3327]: W0123 17:59:38.107730 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.109679 kubelet[3327]: E0123 17:59:38.107838 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.110555 kubelet[3327]: E0123 17:59:38.109955 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.112582 kubelet[3327]: W0123 17:59:38.110794 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.112582 kubelet[3327]: E0123 17:59:38.110879 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.130167 containerd[1996]: time="2026-01-23T17:59:38.130092247Z" level=info msg="connecting to shim aeac2141607b0164711ff6015faba2d298c9b5eeedfa4d6c3084149c419fe658" address="unix:///run/containerd/s/9112bd20fe5039b22298657928584aa24d364908d8c4b7d46b464f534b8f1935" namespace=k8s.io protocol=ttrpc version=3
Jan 23 17:59:38.149523 kubelet[3327]: E0123 17:59:38.149067 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.150388 kubelet[3327]: W0123 17:59:38.150166 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.150758 kubelet[3327]: E0123 17:59:38.150694 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.194101 kubelet[3327]: E0123 17:59:38.193966 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.195365 kubelet[3327]: W0123 17:59:38.194829 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.195365 kubelet[3327]: E0123 17:59:38.194884 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.197481 kubelet[3327]: E0123 17:59:38.197288 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.198743 kubelet[3327]: W0123 17:59:38.197919 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.199427 kubelet[3327]: E0123 17:59:38.198482 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.203232 kubelet[3327]: E0123 17:59:38.202717 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.203232 kubelet[3327]: W0123 17:59:38.202756 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.203232 kubelet[3327]: E0123 17:59:38.202828 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.204393 kubelet[3327]: E0123 17:59:38.203922 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.204393 kubelet[3327]: W0123 17:59:38.203974 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.204393 kubelet[3327]: E0123 17:59:38.204133 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.205210 kubelet[3327]: E0123 17:59:38.205173 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.205475 kubelet[3327]: W0123 17:59:38.205444 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.205748 kubelet[3327]: E0123 17:59:38.205706 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.207591 kubelet[3327]: E0123 17:59:38.207228 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.207591 kubelet[3327]: W0123 17:59:38.207367 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.207591 kubelet[3327]: E0123 17:59:38.207440 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.208842 kubelet[3327]: E0123 17:59:38.208759 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.209276 kubelet[3327]: W0123 17:59:38.208984 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.209276 kubelet[3327]: E0123 17:59:38.209056 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.210733 kubelet[3327]: E0123 17:59:38.210482 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.210733 kubelet[3327]: W0123 17:59:38.210660 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.211326 kubelet[3327]: E0123 17:59:38.210734 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.213063 kubelet[3327]: E0123 17:59:38.212848 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.213063 kubelet[3327]: W0123 17:59:38.213002 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.213443 kubelet[3327]: E0123 17:59:38.213204 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.213780 kubelet[3327]: E0123 17:59:38.213742 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.214067 kubelet[3327]: W0123 17:59:38.213886 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.214067 kubelet[3327]: E0123 17:59:38.213975 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.216103 kubelet[3327]: E0123 17:59:38.216037 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.216103 kubelet[3327]: W0123 17:59:38.216076 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.216103 kubelet[3327]: E0123 17:59:38.216147 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 17:59:38.217800 kubelet[3327]: E0123 17:59:38.216538 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.217800 kubelet[3327]: W0123 17:59:38.216560 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.217800 kubelet[3327]: E0123 17:59:38.217593 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 17:59:38.217800 kubelet[3327]: W0123 17:59:38.217665 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 17:59:38.217800 kubelet[3327]: E0123 17:59:38.217559 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 17:59:38.217800 kubelet[3327]: E0123 17:59:38.217736 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.218422 kubelet[3327]: E0123 17:59:38.218348 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.218485 kubelet[3327]: W0123 17:59:38.218421 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.219725 kubelet[3327]: E0123 17:59:38.219555 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.221564 kubelet[3327]: E0123 17:59:38.221178 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.221564 kubelet[3327]: W0123 17:59:38.221216 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.221564 kubelet[3327]: E0123 17:59:38.221454 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:38.224323 kubelet[3327]: E0123 17:59:38.224064 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.224323 kubelet[3327]: W0123 17:59:38.224133 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.224323 kubelet[3327]: E0123 17:59:38.224256 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.225847 kubelet[3327]: E0123 17:59:38.225164 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.225847 kubelet[3327]: W0123 17:59:38.225206 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.225847 kubelet[3327]: E0123 17:59:38.225437 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:38.227192 kubelet[3327]: E0123 17:59:38.226656 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.227656 kubelet[3327]: W0123 17:59:38.227190 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.228201 kubelet[3327]: E0123 17:59:38.227857 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.231786 kubelet[3327]: E0123 17:59:38.231726 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.231786 kubelet[3327]: W0123 17:59:38.231765 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.233743 kubelet[3327]: E0123 17:59:38.233684 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:38.239338 kubelet[3327]: E0123 17:59:38.238551 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.239338 kubelet[3327]: W0123 17:59:38.238594 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.241345 kubelet[3327]: E0123 17:59:38.239748 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.241345 kubelet[3327]: E0123 17:59:38.239839 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.241345 kubelet[3327]: W0123 17:59:38.239861 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.241345 kubelet[3327]: E0123 17:59:38.240035 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:38.241345 kubelet[3327]: E0123 17:59:38.241052 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.241345 kubelet[3327]: W0123 17:59:38.241299 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.241692 kubelet[3327]: E0123 17:59:38.241399 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.242227 systemd[1]: Started cri-containerd-aeac2141607b0164711ff6015faba2d298c9b5eeedfa4d6c3084149c419fe658.scope - libcontainer container aeac2141607b0164711ff6015faba2d298c9b5eeedfa4d6c3084149c419fe658. Jan 23 17:59:38.247190 containerd[1996]: time="2026-01-23T17:59:38.245948456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zv2ff,Uid:fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:38.247440 kubelet[3327]: E0123 17:59:38.246578 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.247440 kubelet[3327]: W0123 17:59:38.246607 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.247440 kubelet[3327]: E0123 17:59:38.246991 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.247440 kubelet[3327]: W0123 17:59:38.247026 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, 
args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.247440 kubelet[3327]: E0123 17:59:38.247054 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.247440 kubelet[3327]: E0123 17:59:38.247098 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.250466 kubelet[3327]: E0123 17:59:38.247807 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.250466 kubelet[3327]: W0123 17:59:38.247864 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.250466 kubelet[3327]: E0123 17:59:38.247897 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:38.293855 kubelet[3327]: E0123 17:59:38.293705 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:38.294821 kubelet[3327]: W0123 17:59:38.293885 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:38.294821 kubelet[3327]: E0123 17:59:38.293965 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:38.304771 containerd[1996]: time="2026-01-23T17:59:38.304679276Z" level=info msg="connecting to shim 5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7" address="unix:///run/containerd/s/48ab08cb5a40e28b3d1fad90a74a70aee03389af59b920d564abb358f317739b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:38.375209 systemd[1]: Started cri-containerd-5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7.scope - libcontainer container 5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7. Jan 23 17:59:38.385404 containerd[1996]: time="2026-01-23T17:59:38.385262865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f6d487bbc-lqkd7,Uid:f9169b38-cf37-4b06-9e75-a4d91b502f72,Namespace:calico-system,Attempt:0,} returns sandbox id \"aeac2141607b0164711ff6015faba2d298c9b5eeedfa4d6c3084149c419fe658\"" Jan 23 17:59:38.390226 containerd[1996]: time="2026-01-23T17:59:38.390160677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 17:59:38.451160 containerd[1996]: time="2026-01-23T17:59:38.451110585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zv2ff,Uid:fdb70a8e-b9fa-42bb-a6f7-b8f29b6ead2e,Namespace:calico-system,Attempt:0,} returns sandbox id \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\"" Jan 23 17:59:39.645901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836019592.mount: Deactivated successfully. 
Jan 23 17:59:39.850489 kubelet[3327]: E0123 17:59:39.850406 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 17:59:40.470727 containerd[1996]: time="2026-01-23T17:59:40.470641991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:40.473333 containerd[1996]: time="2026-01-23T17:59:40.473245259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 23 17:59:40.474291 containerd[1996]: time="2026-01-23T17:59:40.474209663Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:40.479671 containerd[1996]: time="2026-01-23T17:59:40.479574911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:40.481168 containerd[1996]: time="2026-01-23T17:59:40.480696107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.090472706s" Jan 23 17:59:40.481168 containerd[1996]: time="2026-01-23T17:59:40.480750395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 17:59:40.483703 containerd[1996]: time="2026-01-23T17:59:40.483644651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 17:59:40.514527 containerd[1996]: time="2026-01-23T17:59:40.513701879Z" level=info msg="CreateContainer within sandbox \"aeac2141607b0164711ff6015faba2d298c9b5eeedfa4d6c3084149c419fe658\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 17:59:40.528478 containerd[1996]: time="2026-01-23T17:59:40.526923947Z" level=info msg="Container a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:40.537724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641998361.mount: Deactivated successfully. Jan 23 17:59:40.543412 containerd[1996]: time="2026-01-23T17:59:40.543342251Z" level=info msg="CreateContainer within sandbox \"aeac2141607b0164711ff6015faba2d298c9b5eeedfa4d6c3084149c419fe658\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc\"" Jan 23 17:59:40.545998 containerd[1996]: time="2026-01-23T17:59:40.545827595Z" level=info msg="StartContainer for \"a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc\"" Jan 23 17:59:40.557835 containerd[1996]: time="2026-01-23T17:59:40.557708231Z" level=info msg="connecting to shim a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc" address="unix:///run/containerd/s/9112bd20fe5039b22298657928584aa24d364908d8c4b7d46b464f534b8f1935" protocol=ttrpc version=3 Jan 23 17:59:40.598832 systemd[1]: Started cri-containerd-a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc.scope - libcontainer container a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc. 
Jan 23 17:59:40.682333 containerd[1996]: time="2026-01-23T17:59:40.682011180Z" level=info msg="StartContainer for \"a9612d4f2a750ec4eecd12476e4163bb10c41394b91fd042498b731d9c2e93dc\" returns successfully" Jan 23 17:59:41.162662 kubelet[3327]: E0123 17:59:41.162592 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.162662 kubelet[3327]: W0123 17:59:41.162651 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.164232 kubelet[3327]: E0123 17:59:41.162685 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.165052 kubelet[3327]: E0123 17:59:41.164982 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.165189 kubelet[3327]: W0123 17:59:41.165043 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.165189 kubelet[3327]: E0123 17:59:41.165154 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.165568 kubelet[3327]: E0123 17:59:41.165496 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.165568 kubelet[3327]: W0123 17:59:41.165556 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.165778 kubelet[3327]: E0123 17:59:41.165608 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.165956 kubelet[3327]: E0123 17:59:41.165922 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.166016 kubelet[3327]: W0123 17:59:41.165950 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.166016 kubelet[3327]: E0123 17:59:41.165991 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.166279 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.167643 kubelet[3327]: W0123 17:59:41.166308 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.166331 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.166659 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.167643 kubelet[3327]: W0123 17:59:41.166680 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.166705 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.167004 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.167643 kubelet[3327]: W0123 17:59:41.167022 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.167043 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.167643 kubelet[3327]: E0123 17:59:41.167404 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.168330 kubelet[3327]: W0123 17:59:41.167425 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.168330 kubelet[3327]: E0123 17:59:41.167449 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.168330 kubelet[3327]: E0123 17:59:41.167820 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.168330 kubelet[3327]: W0123 17:59:41.167840 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.168330 kubelet[3327]: E0123 17:59:41.167861 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.168330 kubelet[3327]: E0123 17:59:41.168128 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.168330 kubelet[3327]: W0123 17:59:41.168144 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.168330 kubelet[3327]: E0123 17:59:41.168163 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.168418 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.169784 kubelet[3327]: W0123 17:59:41.168433 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.168451 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.168746 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.169784 kubelet[3327]: W0123 17:59:41.168762 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.168781 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.169033 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.169784 kubelet[3327]: W0123 17:59:41.169049 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.169067 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.169784 kubelet[3327]: E0123 17:59:41.169311 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.170263 kubelet[3327]: W0123 17:59:41.169327 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.170263 kubelet[3327]: E0123 17:59:41.169345 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.170263 kubelet[3327]: E0123 17:59:41.169646 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.170263 kubelet[3327]: W0123 17:59:41.169665 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.170263 kubelet[3327]: E0123 17:59:41.169686 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.175610 kubelet[3327]: I0123 17:59:41.175196 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f6d487bbc-lqkd7" podStartSLOduration=2.082134405 podStartE2EDuration="4.175173587s" podCreationTimestamp="2026-01-23 17:59:37 +0000 UTC" firstStartedPulling="2026-01-23 17:59:38.389362665 +0000 UTC m=+34.926749827" lastFinishedPulling="2026-01-23 17:59:40.482401835 +0000 UTC m=+37.019789009" observedRunningTime="2026-01-23 17:59:41.135437662 +0000 UTC m=+37.672824944" watchObservedRunningTime="2026-01-23 17:59:41.175173587 +0000 UTC m=+37.712560761" Jan 23 17:59:41.237469 kubelet[3327]: E0123 17:59:41.237425 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.237469 kubelet[3327]: W0123 17:59:41.237461 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.237902 kubelet[3327]: E0123 17:59:41.237493 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.238131 kubelet[3327]: E0123 17:59:41.238099 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.238204 kubelet[3327]: W0123 17:59:41.238130 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.238204 kubelet[3327]: E0123 17:59:41.238171 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.238628 kubelet[3327]: E0123 17:59:41.238600 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.238722 kubelet[3327]: W0123 17:59:41.238627 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.238722 kubelet[3327]: E0123 17:59:41.238661 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.239055 kubelet[3327]: E0123 17:59:41.239027 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.239133 kubelet[3327]: W0123 17:59:41.239053 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.239133 kubelet[3327]: E0123 17:59:41.239089 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.239419 kubelet[3327]: E0123 17:59:41.239398 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.239419 kubelet[3327]: W0123 17:59:41.239414 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.239739 kubelet[3327]: E0123 17:59:41.239463 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.239921 kubelet[3327]: E0123 17:59:41.239893 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.239990 kubelet[3327]: W0123 17:59:41.239919 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.240130 kubelet[3327]: E0123 17:59:41.240026 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.240366 kubelet[3327]: E0123 17:59:41.240339 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.240540 kubelet[3327]: W0123 17:59:41.240363 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.240540 kubelet[3327]: E0123 17:59:41.240465 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.240866 kubelet[3327]: E0123 17:59:41.240838 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.240942 kubelet[3327]: W0123 17:59:41.240864 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.240942 kubelet[3327]: E0123 17:59:41.240902 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.241275 kubelet[3327]: E0123 17:59:41.241249 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.241434 kubelet[3327]: W0123 17:59:41.241274 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.241434 kubelet[3327]: E0123 17:59:41.241373 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.241857 kubelet[3327]: E0123 17:59:41.241692 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.241857 kubelet[3327]: W0123 17:59:41.241720 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.242024 kubelet[3327]: E0123 17:59:41.241992 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.242024 kubelet[3327]: W0123 17:59:41.242017 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.242128 kubelet[3327]: E0123 17:59:41.242060 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.242437 kubelet[3327]: E0123 17:59:41.242415 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.242637 kubelet[3327]: E0123 17:59:41.242610 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.242705 kubelet[3327]: W0123 17:59:41.242635 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.242778 kubelet[3327]: E0123 17:59:41.242702 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.243090 kubelet[3327]: E0123 17:59:41.243062 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.243172 kubelet[3327]: W0123 17:59:41.243088 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.243172 kubelet[3327]: E0123 17:59:41.243122 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.244069 kubelet[3327]: E0123 17:59:41.244031 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.244563 kubelet[3327]: W0123 17:59:41.244250 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.244563 kubelet[3327]: E0123 17:59:41.244303 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.245154 kubelet[3327]: E0123 17:59:41.245101 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.245441 kubelet[3327]: W0123 17:59:41.245247 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.245441 kubelet[3327]: E0123 17:59:41.245296 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.246122 kubelet[3327]: E0123 17:59:41.245977 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.246462 kubelet[3327]: W0123 17:59:41.246344 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.246462 kubelet[3327]: E0123 17:59:41.246411 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.247301 kubelet[3327]: E0123 17:59:41.247224 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.247301 kubelet[3327]: W0123 17:59:41.247256 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.247814 kubelet[3327]: E0123 17:59:41.247786 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 17:59:41.248584 kubelet[3327]: E0123 17:59:41.248496 3327 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 17:59:41.248846 kubelet[3327]: W0123 17:59:41.248699 3327 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 17:59:41.248846 kubelet[3327]: E0123 17:59:41.248733 3327 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 17:59:41.702530 containerd[1996]: time="2026-01-23T17:59:41.702421117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:41.705432 containerd[1996]: time="2026-01-23T17:59:41.705354589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 17:59:41.707954 containerd[1996]: time="2026-01-23T17:59:41.707846185Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:41.712317 containerd[1996]: time="2026-01-23T17:59:41.712234237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:41.713694 containerd[1996]: time="2026-01-23T17:59:41.713412649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.229707314s" Jan 23 17:59:41.713694 containerd[1996]: time="2026-01-23T17:59:41.713471185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 17:59:41.719784 containerd[1996]: time="2026-01-23T17:59:41.719452609Z" level=info msg="CreateContainer within sandbox \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 17:59:41.743830 containerd[1996]: time="2026-01-23T17:59:41.743773153Z" level=info msg="Container 73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:41.764019 containerd[1996]: time="2026-01-23T17:59:41.763935829Z" level=info msg="CreateContainer within sandbox \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6\"" Jan 23 17:59:41.764889 containerd[1996]: time="2026-01-23T17:59:41.764845705Z" level=info msg="StartContainer for \"73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6\"" Jan 23 17:59:41.768280 containerd[1996]: time="2026-01-23T17:59:41.768126961Z" level=info msg="connecting to shim 73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6" address="unix:///run/containerd/s/48ab08cb5a40e28b3d1fad90a74a70aee03389af59b920d564abb358f317739b" protocol=ttrpc version=3 Jan 23 17:59:41.811822 systemd[1]: Started cri-containerd-73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6.scope - libcontainer container 
73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6. Jan 23 17:59:41.850366 kubelet[3327]: E0123 17:59:41.850199 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 17:59:41.949297 containerd[1996]: time="2026-01-23T17:59:41.949103006Z" level=info msg="StartContainer for \"73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6\" returns successfully" Jan 23 17:59:41.981831 systemd[1]: cri-containerd-73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6.scope: Deactivated successfully. Jan 23 17:59:41.989006 containerd[1996]: time="2026-01-23T17:59:41.988914099Z" level=info msg="received container exit event container_id:\"73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6\" id:\"73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6\" pid:4143 exited_at:{seconds:1769191181 nanos:986913915}" Jan 23 17:59:42.058088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73ed25d6a0f843ea62dd24f13b59324fbe80fdbae8011fed0b5dc9eabc500ed6-rootfs.mount: Deactivated successfully. 
Jan 23 17:59:43.129912 containerd[1996]: time="2026-01-23T17:59:43.129858000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 17:59:43.855169 kubelet[3327]: E0123 17:59:43.855108 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 17:59:45.850301 kubelet[3327]: E0123 17:59:45.850226 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 17:59:46.089879 containerd[1996]: time="2026-01-23T17:59:46.089791863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:46.092241 containerd[1996]: time="2026-01-23T17:59:46.091862319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 17:59:46.094718 containerd[1996]: time="2026-01-23T17:59:46.094559187Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:46.102666 containerd[1996]: time="2026-01-23T17:59:46.101240727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:46.104593 containerd[1996]: time="2026-01-23T17:59:46.104536503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.973764547s" Jan 23 17:59:46.105216 containerd[1996]: time="2026-01-23T17:59:46.105156687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 17:59:46.110188 containerd[1996]: time="2026-01-23T17:59:46.109869735Z" level=info msg="CreateContainer within sandbox \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 17:59:46.132843 containerd[1996]: time="2026-01-23T17:59:46.132775455Z" level=info msg="Container 61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:46.155827 containerd[1996]: time="2026-01-23T17:59:46.155741415Z" level=info msg="CreateContainer within sandbox \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff\"" Jan 23 17:59:46.156759 containerd[1996]: time="2026-01-23T17:59:46.156702003Z" level=info msg="StartContainer for \"61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff\"" Jan 23 17:59:46.160161 containerd[1996]: time="2026-01-23T17:59:46.160030203Z" level=info msg="connecting to shim 61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff" address="unix:///run/containerd/s/48ab08cb5a40e28b3d1fad90a74a70aee03389af59b920d564abb358f317739b" protocol=ttrpc version=3 Jan 23 17:59:46.205839 systemd[1]: Started cri-containerd-61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff.scope - libcontainer container 
61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff. Jan 23 17:59:46.339179 containerd[1996]: time="2026-01-23T17:59:46.338946652Z" level=info msg="StartContainer for \"61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff\" returns successfully" Jan 23 17:59:47.338070 containerd[1996]: time="2026-01-23T17:59:47.338007185Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 17:59:47.343985 systemd[1]: cri-containerd-61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff.scope: Deactivated successfully. Jan 23 17:59:47.345701 systemd[1]: cri-containerd-61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff.scope: Consumed 987ms CPU time, 187.8M memory peak, 165.9M written to disk. Jan 23 17:59:47.352268 containerd[1996]: time="2026-01-23T17:59:47.352146617Z" level=info msg="received container exit event container_id:\"61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff\" id:\"61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff\" pid:4204 exited_at:{seconds:1769191187 nanos:351474701}" Jan 23 17:59:47.397616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61636aead27c994dcefa992faa2ff5914ff6d214a34f436034a1b558bfdbe9ff-rootfs.mount: Deactivated successfully. Jan 23 17:59:47.405098 kubelet[3327]: I0123 17:59:47.404565 3327 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 17:59:47.474924 systemd[1]: Created slice kubepods-burstable-pod29357653_d3ec_4227_8eb2_15d81c3dcd98.slice - libcontainer container kubepods-burstable-pod29357653_d3ec_4227_8eb2_15d81c3dcd98.slice. 
Jan 23 17:59:47.511368 systemd[1]: Created slice kubepods-besteffort-pod23974028_c047_4f8c_92ef_f4b897791230.slice - libcontainer container kubepods-besteffort-pod23974028_c047_4f8c_92ef_f4b897791230.slice. Jan 23 17:59:47.533839 systemd[1]: Created slice kubepods-burstable-podca2b2126_a623_4814_a5b3_b02ad64431ba.slice - libcontainer container kubepods-burstable-podca2b2126_a623_4814_a5b3_b02ad64431ba.slice. Jan 23 17:59:47.569707 systemd[1]: Created slice kubepods-besteffort-pod01be8348_3893_401c_b7b7_ba407784cdaf.slice - libcontainer container kubepods-besteffort-pod01be8348_3893_401c_b7b7_ba407784cdaf.slice. Jan 23 17:59:47.580826 kubelet[3327]: W0123 17:59:47.580758 3327 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-24-204" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-204' and this object Jan 23 17:59:47.581707 kubelet[3327]: E0123 17:59:47.581484 3327 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-24-204\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-204' and this object" logger="UnhandledError" Jan 23 17:59:47.582300 kubelet[3327]: W0123 17:59:47.582213 3327 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-24-204" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-204' and this object Jan 23 17:59:47.582927 kubelet[3327]: E0123 17:59:47.582771 3327 reflector.go:166] "Unhandled Error" 
err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-24-204\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-204' and this object" logger="UnhandledError" Jan 23 17:59:47.583408 kubelet[3327]: W0123 17:59:47.583273 3327 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-24-204" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-204' and this object Jan 23 17:59:47.584553 kubelet[3327]: E0123 17:59:47.583762 3327 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-24-204\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-204' and this object" logger="UnhandledError" Jan 23 17:59:47.584902 kubelet[3327]: W0123 17:59:47.584860 3327 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-24-204" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-204' and this object Jan 23 17:59:47.585071 kubelet[3327]: E0123 17:59:47.585036 3327 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-24-204\" cannot list resource \"secrets\" in API group \"\" in the 
namespace \"calico-system\": no relationship found between node 'ip-172-31-24-204' and this object" logger="UnhandledError" Jan 23 17:59:47.585317 kubelet[3327]: W0123 17:59:47.585271 3327 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ip-172-31-24-204" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-204' and this object Jan 23 17:59:47.585700 kubelet[3327]: E0123 17:59:47.585650 3327 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ip-172-31-24-204\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-204' and this object" logger="UnhandledError" Jan 23 17:59:47.593018 kubelet[3327]: I0123 17:59:47.592752 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5jq\" (UniqueName: \"kubernetes.io/projected/ca2b2126-a623-4814-a5b3-b02ad64431ba-kube-api-access-bf5jq\") pod \"coredns-668d6bf9bc-w5lcf\" (UID: \"ca2b2126-a623-4814-a5b3-b02ad64431ba\") " pod="kube-system/coredns-668d6bf9bc-w5lcf" Jan 23 17:59:47.593882 kubelet[3327]: I0123 17:59:47.593214 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2b2126-a623-4814-a5b3-b02ad64431ba-config-volume\") pod \"coredns-668d6bf9bc-w5lcf\" (UID: \"ca2b2126-a623-4814-a5b3-b02ad64431ba\") " pod="kube-system/coredns-668d6bf9bc-w5lcf" Jan 23 17:59:47.594259 kubelet[3327]: I0123 17:59:47.593313 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/29357653-d3ec-4227-8eb2-15d81c3dcd98-config-volume\") pod \"coredns-668d6bf9bc-k29vr\" (UID: \"29357653-d3ec-4227-8eb2-15d81c3dcd98\") " pod="kube-system/coredns-668d6bf9bc-k29vr" Jan 23 17:59:47.595551 kubelet[3327]: I0123 17:59:47.594348 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23974028-c047-4f8c-92ef-f4b897791230-tigera-ca-bundle\") pod \"calico-kube-controllers-77f8ffc4dc-6h2ph\" (UID: \"23974028-c047-4f8c-92ef-f4b897791230\") " pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" Jan 23 17:59:47.597316 kubelet[3327]: I0123 17:59:47.596431 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgtnm\" (UniqueName: \"kubernetes.io/projected/23974028-c047-4f8c-92ef-f4b897791230-kube-api-access-jgtnm\") pod \"calico-kube-controllers-77f8ffc4dc-6h2ph\" (UID: \"23974028-c047-4f8c-92ef-f4b897791230\") " pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" Jan 23 17:59:47.597316 kubelet[3327]: I0123 17:59:47.596655 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qb2c\" (UniqueName: \"kubernetes.io/projected/29357653-d3ec-4227-8eb2-15d81c3dcd98-kube-api-access-8qb2c\") pod \"coredns-668d6bf9bc-k29vr\" (UID: \"29357653-d3ec-4227-8eb2-15d81c3dcd98\") " pod="kube-system/coredns-668d6bf9bc-k29vr" Jan 23 17:59:47.608688 systemd[1]: Created slice kubepods-besteffort-podf787ec8c_40de_479c_b75d_f3d24f6583cc.slice - libcontainer container kubepods-besteffort-podf787ec8c_40de_479c_b75d_f3d24f6583cc.slice. Jan 23 17:59:47.633541 systemd[1]: Created slice kubepods-besteffort-pode94a1ef1_631e_4bed_b300_2c431484cc06.slice - libcontainer container kubepods-besteffort-pode94a1ef1_631e_4bed_b300_2c431484cc06.slice. 
Jan 23 17:59:47.648451 systemd[1]: Created slice kubepods-besteffort-pod169e9a61_dd6f_4dcb_a857_0adba680dfb0.slice - libcontainer container kubepods-besteffort-pod169e9a61_dd6f_4dcb_a857_0adba680dfb0.slice. Jan 23 17:59:47.697343 kubelet[3327]: I0123 17:59:47.696869 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-ca-bundle\") pod \"whisker-548498fc5b-sjj6m\" (UID: \"e94a1ef1-631e-4bed-b300-2c431484cc06\") " pod="calico-system/whisker-548498fc5b-sjj6m" Jan 23 17:59:47.697343 kubelet[3327]: I0123 17:59:47.696933 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp824\" (UniqueName: \"kubernetes.io/projected/01be8348-3893-401c-b7b7-ba407784cdaf-kube-api-access-cp824\") pod \"calico-apiserver-8464998c88-xdthn\" (UID: \"01be8348-3893-401c-b7b7-ba407784cdaf\") " pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" Jan 23 17:59:47.697343 kubelet[3327]: I0123 17:59:47.696971 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztspz\" (UniqueName: \"kubernetes.io/projected/169e9a61-dd6f-4dcb-a857-0adba680dfb0-kube-api-access-ztspz\") pod \"goldmane-666569f655-82cnj\" (UID: \"169e9a61-dd6f-4dcb-a857-0adba680dfb0\") " pod="calico-system/goldmane-666569f655-82cnj" Jan 23 17:59:47.697343 kubelet[3327]: I0123 17:59:47.697014 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f787ec8c-40de-479c-b75d-f3d24f6583cc-calico-apiserver-certs\") pod \"calico-apiserver-8464998c88-nzr4p\" (UID: \"f787ec8c-40de-479c-b75d-f3d24f6583cc\") " pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" Jan 23 17:59:47.697343 kubelet[3327]: I0123 17:59:47.697054 3327 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46r6\" (UniqueName: \"kubernetes.io/projected/f787ec8c-40de-479c-b75d-f3d24f6583cc-kube-api-access-z46r6\") pod \"calico-apiserver-8464998c88-nzr4p\" (UID: \"f787ec8c-40de-479c-b75d-f3d24f6583cc\") " pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" Jan 23 17:59:47.698114 kubelet[3327]: I0123 17:59:47.697134 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/169e9a61-dd6f-4dcb-a857-0adba680dfb0-goldmane-ca-bundle\") pod \"goldmane-666569f655-82cnj\" (UID: \"169e9a61-dd6f-4dcb-a857-0adba680dfb0\") " pod="calico-system/goldmane-666569f655-82cnj" Jan 23 17:59:47.698114 kubelet[3327]: I0123 17:59:47.697169 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/169e9a61-dd6f-4dcb-a857-0adba680dfb0-goldmane-key-pair\") pod \"goldmane-666569f655-82cnj\" (UID: \"169e9a61-dd6f-4dcb-a857-0adba680dfb0\") " pod="calico-system/goldmane-666569f655-82cnj" Jan 23 17:59:47.698114 kubelet[3327]: I0123 17:59:47.697231 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-backend-key-pair\") pod \"whisker-548498fc5b-sjj6m\" (UID: \"e94a1ef1-631e-4bed-b300-2c431484cc06\") " pod="calico-system/whisker-548498fc5b-sjj6m" Jan 23 17:59:47.698114 kubelet[3327]: I0123 17:59:47.697272 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169e9a61-dd6f-4dcb-a857-0adba680dfb0-config\") pod \"goldmane-666569f655-82cnj\" (UID: \"169e9a61-dd6f-4dcb-a857-0adba680dfb0\") " pod="calico-system/goldmane-666569f655-82cnj" Jan 23 17:59:47.698714 
kubelet[3327]: I0123 17:59:47.698133 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/01be8348-3893-401c-b7b7-ba407784cdaf-calico-apiserver-certs\") pod \"calico-apiserver-8464998c88-xdthn\" (UID: \"01be8348-3893-401c-b7b7-ba407784cdaf\") " pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" Jan 23 17:59:47.698714 kubelet[3327]: I0123 17:59:47.698252 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksnpk\" (UniqueName: \"kubernetes.io/projected/e94a1ef1-631e-4bed-b300-2c431484cc06-kube-api-access-ksnpk\") pod \"whisker-548498fc5b-sjj6m\" (UID: \"e94a1ef1-631e-4bed-b300-2c431484cc06\") " pod="calico-system/whisker-548498fc5b-sjj6m" Jan 23 17:59:47.793772 containerd[1996]: time="2026-01-23T17:59:47.793458775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29vr,Uid:29357653-d3ec-4227-8eb2-15d81c3dcd98,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:47.829787 containerd[1996]: time="2026-01-23T17:59:47.829704740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f8ffc4dc-6h2ph,Uid:23974028-c047-4f8c-92ef-f4b897791230,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:47.848475 containerd[1996]: time="2026-01-23T17:59:47.848090624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w5lcf,Uid:ca2b2126-a623-4814-a5b3-b02ad64431ba,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:47.876818 systemd[1]: Created slice kubepods-besteffort-podbd861cd6_0ac7_4fc8_b917_14516a6e2c66.slice - libcontainer container kubepods-besteffort-podbd861cd6_0ac7_4fc8_b917_14516a6e2c66.slice. 
Jan 23 17:59:47.882019 containerd[1996]: time="2026-01-23T17:59:47.881956304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzd8d,Uid:bd861cd6-0ac7-4fc8-b917-14516a6e2c66,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:47.900964 containerd[1996]: time="2026-01-23T17:59:47.900913544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-xdthn,Uid:01be8348-3893-401c-b7b7-ba407784cdaf,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:47.922491 containerd[1996]: time="2026-01-23T17:59:47.922433204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-nzr4p,Uid:f787ec8c-40de-479c-b75d-f3d24f6583cc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:48.208107 containerd[1996]: time="2026-01-23T17:59:48.206489129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 17:59:48.321767 containerd[1996]: time="2026-01-23T17:59:48.321700398Z" level=error msg="Failed to destroy network for sandbox \"e683a90564a6ce6ada57948528646ef5c838462c66be6d507813f63ed8201634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.324137 containerd[1996]: time="2026-01-23T17:59:48.324061050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29vr,Uid:29357653-d3ec-4227-8eb2-15d81c3dcd98,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e683a90564a6ce6ada57948528646ef5c838462c66be6d507813f63ed8201634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.334540 kubelet[3327]: E0123 17:59:48.334097 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"e683a90564a6ce6ada57948528646ef5c838462c66be6d507813f63ed8201634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.334540 kubelet[3327]: E0123 17:59:48.334221 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e683a90564a6ce6ada57948528646ef5c838462c66be6d507813f63ed8201634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k29vr" Jan 23 17:59:48.335609 kubelet[3327]: E0123 17:59:48.334260 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e683a90564a6ce6ada57948528646ef5c838462c66be6d507813f63ed8201634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k29vr" Jan 23 17:59:48.335770 kubelet[3327]: E0123 17:59:48.335684 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k29vr_kube-system(29357653-d3ec-4227-8eb2-15d81c3dcd98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k29vr_kube-system(29357653-d3ec-4227-8eb2-15d81c3dcd98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e683a90564a6ce6ada57948528646ef5c838462c66be6d507813f63ed8201634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-k29vr" podUID="29357653-d3ec-4227-8eb2-15d81c3dcd98" Jan 23 17:59:48.364154 containerd[1996]: time="2026-01-23T17:59:48.364092570Z" level=error msg="Failed to destroy network for sandbox \"bb9140908fc55c1da6f236a5c454caef8b0047af1c54902d6b4cd1dd3dd4eeca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.366346 containerd[1996]: time="2026-01-23T17:59:48.366194226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w5lcf,Uid:ca2b2126-a623-4814-a5b3-b02ad64431ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9140908fc55c1da6f236a5c454caef8b0047af1c54902d6b4cd1dd3dd4eeca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.366926 kubelet[3327]: E0123 17:59:48.366574 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9140908fc55c1da6f236a5c454caef8b0047af1c54902d6b4cd1dd3dd4eeca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.366926 kubelet[3327]: E0123 17:59:48.366655 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9140908fc55c1da6f236a5c454caef8b0047af1c54902d6b4cd1dd3dd4eeca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w5lcf" Jan 23 
17:59:48.366926 kubelet[3327]: E0123 17:59:48.366690 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb9140908fc55c1da6f236a5c454caef8b0047af1c54902d6b4cd1dd3dd4eeca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w5lcf" Jan 23 17:59:48.367981 kubelet[3327]: E0123 17:59:48.366768 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-w5lcf_kube-system(ca2b2126-a623-4814-a5b3-b02ad64431ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w5lcf_kube-system(ca2b2126-a623-4814-a5b3-b02ad64431ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb9140908fc55c1da6f236a5c454caef8b0047af1c54902d6b4cd1dd3dd4eeca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w5lcf" podUID="ca2b2126-a623-4814-a5b3-b02ad64431ba" Jan 23 17:59:48.405747 containerd[1996]: time="2026-01-23T17:59:48.405651450Z" level=error msg="Failed to destroy network for sandbox \"0eddb439bd628f56b9a3d1a8d12034a01ac1d3a6e06f5f95e61b8b05b5a670f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.412638 containerd[1996]: time="2026-01-23T17:59:48.412305750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f8ffc4dc-6h2ph,Uid:23974028-c047-4f8c-92ef-f4b897791230,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"0eddb439bd628f56b9a3d1a8d12034a01ac1d3a6e06f5f95e61b8b05b5a670f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.427325 kubelet[3327]: E0123 17:59:48.422703 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eddb439bd628f56b9a3d1a8d12034a01ac1d3a6e06f5f95e61b8b05b5a670f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.427325 kubelet[3327]: E0123 17:59:48.422877 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eddb439bd628f56b9a3d1a8d12034a01ac1d3a6e06f5f95e61b8b05b5a670f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" Jan 23 17:59:48.427325 kubelet[3327]: E0123 17:59:48.422916 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eddb439bd628f56b9a3d1a8d12034a01ac1d3a6e06f5f95e61b8b05b5a670f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" Jan 23 17:59:48.426594 systemd[1]: run-netns-cni\x2da41f2a3f\x2db99b\x2d7448\x2d0852\x2d5d5067feb94f.mount: Deactivated successfully. 
Jan 23 17:59:48.428313 kubelet[3327]: E0123 17:59:48.422995 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77f8ffc4dc-6h2ph_calico-system(23974028-c047-4f8c-92ef-f4b897791230)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77f8ffc4dc-6h2ph_calico-system(23974028-c047-4f8c-92ef-f4b897791230)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eddb439bd628f56b9a3d1a8d12034a01ac1d3a6e06f5f95e61b8b05b5a670f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 17:59:48.444534 containerd[1996]: time="2026-01-23T17:59:48.443708827Z" level=error msg="Failed to destroy network for sandbox \"f51633189ac986b5098458a415b1e4f03f1304896cfec4953e7fa6f4a63f02e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.448591 containerd[1996]: time="2026-01-23T17:59:48.447770263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-nzr4p,Uid:f787ec8c-40de-479c-b75d-f3d24f6583cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51633189ac986b5098458a415b1e4f03f1304896cfec4953e7fa6f4a63f02e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.451645 kubelet[3327]: E0123 17:59:48.449926 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"f51633189ac986b5098458a415b1e4f03f1304896cfec4953e7fa6f4a63f02e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.451645 kubelet[3327]: E0123 17:59:48.450023 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51633189ac986b5098458a415b1e4f03f1304896cfec4953e7fa6f4a63f02e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" Jan 23 17:59:48.451645 kubelet[3327]: E0123 17:59:48.450061 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f51633189ac986b5098458a415b1e4f03f1304896cfec4953e7fa6f4a63f02e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" Jan 23 17:59:48.451884 kubelet[3327]: E0123 17:59:48.450133 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8464998c88-nzr4p_calico-apiserver(f787ec8c-40de-479c-b75d-f3d24f6583cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8464998c88-nzr4p_calico-apiserver(f787ec8c-40de-479c-b75d-f3d24f6583cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f51633189ac986b5098458a415b1e4f03f1304896cfec4953e7fa6f4a63f02e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 17:59:48.451745 systemd[1]: run-netns-cni\x2d28a45147\x2d96ec\x2d275d\x2d154e\x2dabff7abfb7c5.mount: Deactivated successfully. Jan 23 17:59:48.458562 containerd[1996]: time="2026-01-23T17:59:48.457680895Z" level=error msg="Failed to destroy network for sandbox \"caf60c6b79267815c4688f19df574b49e896315519b7187c92eea07f439596ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.466272 systemd[1]: run-netns-cni\x2dcc7c146d\x2d4644\x2de5f8\x2d7526\x2da6ebcf17ec88.mount: Deactivated successfully. Jan 23 17:59:48.469570 containerd[1996]: time="2026-01-23T17:59:48.468833515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-xdthn,Uid:01be8348-3893-401c-b7b7-ba407784cdaf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf60c6b79267815c4688f19df574b49e896315519b7187c92eea07f439596ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.469745 kubelet[3327]: E0123 17:59:48.469494 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf60c6b79267815c4688f19df574b49e896315519b7187c92eea07f439596ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.469745 kubelet[3327]: E0123 17:59:48.469598 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"caf60c6b79267815c4688f19df574b49e896315519b7187c92eea07f439596ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" Jan 23 17:59:48.469745 kubelet[3327]: E0123 17:59:48.469632 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"caf60c6b79267815c4688f19df574b49e896315519b7187c92eea07f439596ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" Jan 23 17:59:48.472056 kubelet[3327]: E0123 17:59:48.469705 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8464998c88-xdthn_calico-apiserver(01be8348-3893-401c-b7b7-ba407784cdaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8464998c88-xdthn_calico-apiserver(01be8348-3893-401c-b7b7-ba407784cdaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"caf60c6b79267815c4688f19df574b49e896315519b7187c92eea07f439596ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 17:59:48.472787 containerd[1996]: time="2026-01-23T17:59:48.472547443Z" level=error msg="Failed to destroy network for sandbox \"8720fbe287c6ac206774f681d8e4fc40f047cbb911558f95ef1e51160178cf51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.476176 containerd[1996]: time="2026-01-23T17:59:48.476000371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzd8d,Uid:bd861cd6-0ac7-4fc8-b917-14516a6e2c66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8720fbe287c6ac206774f681d8e4fc40f047cbb911558f95ef1e51160178cf51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.478712 kubelet[3327]: E0123 17:59:48.476342 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8720fbe287c6ac206774f681d8e4fc40f047cbb911558f95ef1e51160178cf51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:48.478712 kubelet[3327]: E0123 17:59:48.476422 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8720fbe287c6ac206774f681d8e4fc40f047cbb911558f95ef1e51160178cf51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzd8d" Jan 23 17:59:48.478712 kubelet[3327]: E0123 17:59:48.476454 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8720fbe287c6ac206774f681d8e4fc40f047cbb911558f95ef1e51160178cf51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzd8d" Jan 23 17:59:48.478243 systemd[1]: run-netns-cni\x2d1bd487f5\x2da887\x2d0917\x2d826d\x2d4d070a1fd932.mount: Deactivated successfully. Jan 23 17:59:48.479031 kubelet[3327]: E0123 17:59:48.477012 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8720fbe287c6ac206774f681d8e4fc40f047cbb911558f95ef1e51160178cf51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 17:59:48.801540 kubelet[3327]: E0123 17:59:48.800723 3327 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 23 17:59:48.801540 kubelet[3327]: E0123 17:59:48.800827 3327 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/169e9a61-dd6f-4dcb-a857-0adba680dfb0-goldmane-key-pair podName:169e9a61-dd6f-4dcb-a857-0adba680dfb0 nodeName:}" failed. No retries permitted until 2026-01-23 17:59:49.300801708 +0000 UTC m=+45.838188882 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/169e9a61-dd6f-4dcb-a857-0adba680dfb0-goldmane-key-pair") pod "goldmane-666569f655-82cnj" (UID: "169e9a61-dd6f-4dcb-a857-0adba680dfb0") : failed to sync secret cache: timed out waiting for the condition Jan 23 17:59:48.801540 kubelet[3327]: E0123 17:59:48.801124 3327 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 17:59:48.801540 kubelet[3327]: E0123 17:59:48.801180 3327 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-ca-bundle podName:e94a1ef1-631e-4bed-b300-2c431484cc06 nodeName:}" failed. No retries permitted until 2026-01-23 17:59:49.301163172 +0000 UTC m=+45.838550334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-ca-bundle") pod "whisker-548498fc5b-sjj6m" (UID: "e94a1ef1-631e-4bed-b300-2c431484cc06") : failed to sync configmap cache: timed out waiting for the condition Jan 23 17:59:48.801540 kubelet[3327]: E0123 17:59:48.801211 3327 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 17:59:48.801980 kubelet[3327]: E0123 17:59:48.801253 3327 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/169e9a61-dd6f-4dcb-a857-0adba680dfb0-goldmane-ca-bundle podName:169e9a61-dd6f-4dcb-a857-0adba680dfb0 nodeName:}" failed. No retries permitted until 2026-01-23 17:59:49.30123852 +0000 UTC m=+45.838625682 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/169e9a61-dd6f-4dcb-a857-0adba680dfb0-goldmane-ca-bundle") pod "goldmane-666569f655-82cnj" (UID: "169e9a61-dd6f-4dcb-a857-0adba680dfb0") : failed to sync configmap cache: timed out waiting for the condition Jan 23 17:59:49.445086 containerd[1996]: time="2026-01-23T17:59:49.444998540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548498fc5b-sjj6m,Uid:e94a1ef1-631e-4bed-b300-2c431484cc06,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:49.466709 containerd[1996]: time="2026-01-23T17:59:49.466206104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-82cnj,Uid:169e9a61-dd6f-4dcb-a857-0adba680dfb0,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:49.623689 containerd[1996]: time="2026-01-23T17:59:49.623465481Z" level=error msg="Failed to destroy network for sandbox \"4c21fc61e7bc20da8567e03ccab9a880944bd351eb2e021f67cd91fe0e0a4fb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:49.628531 containerd[1996]: time="2026-01-23T17:59:49.627192441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548498fc5b-sjj6m,Uid:e94a1ef1-631e-4bed-b300-2c431484cc06,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c21fc61e7bc20da8567e03ccab9a880944bd351eb2e021f67cd91fe0e0a4fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:49.629111 kubelet[3327]: E0123 17:59:49.629038 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c21fc61e7bc20da8567e03ccab9a880944bd351eb2e021f67cd91fe0e0a4fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:49.633708 kubelet[3327]: E0123 17:59:49.629122 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c21fc61e7bc20da8567e03ccab9a880944bd351eb2e021f67cd91fe0e0a4fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548498fc5b-sjj6m" Jan 23 17:59:49.633708 kubelet[3327]: E0123 17:59:49.629157 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c21fc61e7bc20da8567e03ccab9a880944bd351eb2e021f67cd91fe0e0a4fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548498fc5b-sjj6m" Jan 23 17:59:49.633708 kubelet[3327]: E0123 17:59:49.629217 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-548498fc5b-sjj6m_calico-system(e94a1ef1-631e-4bed-b300-2c431484cc06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-548498fc5b-sjj6m_calico-system(e94a1ef1-631e-4bed-b300-2c431484cc06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c21fc61e7bc20da8567e03ccab9a880944bd351eb2e021f67cd91fe0e0a4fb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-548498fc5b-sjj6m" 
podUID="e94a1ef1-631e-4bed-b300-2c431484cc06" Jan 23 17:59:49.630755 systemd[1]: run-netns-cni\x2d9f3f3165\x2d6663\x2d9e93\x2dd7a9\x2ddb4cd7aeff28.mount: Deactivated successfully. Jan 23 17:59:49.654254 containerd[1996]: time="2026-01-23T17:59:49.654187725Z" level=error msg="Failed to destroy network for sandbox \"c5a2c4a071ae36f23660f696b288abd7f22edf873c4d7acd472d1865735b6a03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:49.662391 containerd[1996]: time="2026-01-23T17:59:49.661731357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-82cnj,Uid:169e9a61-dd6f-4dcb-a857-0adba680dfb0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a2c4a071ae36f23660f696b288abd7f22edf873c4d7acd472d1865735b6a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:49.664106 kubelet[3327]: E0123 17:59:49.663986 3327 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a2c4a071ae36f23660f696b288abd7f22edf873c4d7acd472d1865735b6a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 17:59:49.664106 kubelet[3327]: E0123 17:59:49.664071 3327 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a2c4a071ae36f23660f696b288abd7f22edf873c4d7acd472d1865735b6a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-82cnj" Jan 23 17:59:49.664302 kubelet[3327]: E0123 17:59:49.664117 3327 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a2c4a071ae36f23660f696b288abd7f22edf873c4d7acd472d1865735b6a03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-82cnj" Jan 23 17:59:49.664984 systemd[1]: run-netns-cni\x2d97790b21\x2d98c4\x2dd04d\x2d91aa\x2db9977766fefe.mount: Deactivated successfully. Jan 23 17:59:49.666364 kubelet[3327]: E0123 17:59:49.664199 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-82cnj_calico-system(169e9a61-dd6f-4dcb-a857-0adba680dfb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-82cnj_calico-system(169e9a61-dd6f-4dcb-a857-0adba680dfb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5a2c4a071ae36f23660f696b288abd7f22edf873c4d7acd472d1865735b6a03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 17:59:54.485413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165599709.mount: Deactivated successfully. 
Jan 23 17:59:54.548237 containerd[1996]: time="2026-01-23T17:59:54.548143345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:54.550223 containerd[1996]: time="2026-01-23T17:59:54.550153261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 17:59:54.551730 containerd[1996]: time="2026-01-23T17:59:54.551658901Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:54.564219 containerd[1996]: time="2026-01-23T17:59:54.564104245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 17:59:54.565351 containerd[1996]: time="2026-01-23T17:59:54.565231261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.357569264s" Jan 23 17:59:54.565351 containerd[1996]: time="2026-01-23T17:59:54.565303765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 17:59:54.595722 containerd[1996]: time="2026-01-23T17:59:54.595656433Z" level=info msg="CreateContainer within sandbox \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 17:59:54.619319 containerd[1996]: time="2026-01-23T17:59:54.618744433Z" level=info msg="Container 
308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:54.638526 containerd[1996]: time="2026-01-23T17:59:54.638449657Z" level=info msg="CreateContainer within sandbox \"5680d7844c2cc1f25548342ae789a68da3588ccc53340a7c7276e430fe412ee7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1\"" Jan 23 17:59:54.639664 containerd[1996]: time="2026-01-23T17:59:54.639479725Z" level=info msg="StartContainer for \"308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1\"" Jan 23 17:59:54.642532 containerd[1996]: time="2026-01-23T17:59:54.642417073Z" level=info msg="connecting to shim 308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1" address="unix:///run/containerd/s/48ab08cb5a40e28b3d1fad90a74a70aee03389af59b920d564abb358f317739b" protocol=ttrpc version=3 Jan 23 17:59:54.691822 systemd[1]: Started cri-containerd-308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1.scope - libcontainer container 308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1. Jan 23 17:59:54.821061 containerd[1996]: time="2026-01-23T17:59:54.820824062Z" level=info msg="StartContainer for \"308c67cd58333b78fc0e8d779369ed30b31257ea91983a0539abd61bd18f95d1\" returns successfully" Jan 23 17:59:55.093002 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 17:59:55.093131 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 17:59:55.380951 kubelet[3327]: I0123 17:59:55.380105 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zv2ff" podStartSLOduration=2.266082837 podStartE2EDuration="18.380063317s" podCreationTimestamp="2026-01-23 17:59:37 +0000 UTC" firstStartedPulling="2026-01-23 17:59:38.453489465 +0000 UTC m=+34.990876627" lastFinishedPulling="2026-01-23 17:59:54.567469945 +0000 UTC m=+51.104857107" observedRunningTime="2026-01-23 17:59:55.295791817 +0000 UTC m=+51.833179003" watchObservedRunningTime="2026-01-23 17:59:55.380063317 +0000 UTC m=+51.917450503" Jan 23 17:59:55.568238 kubelet[3327]: I0123 17:59:55.567746 3327 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-backend-key-pair\") pod \"e94a1ef1-631e-4bed-b300-2c431484cc06\" (UID: \"e94a1ef1-631e-4bed-b300-2c431484cc06\") " Jan 23 17:59:55.569446 kubelet[3327]: I0123 17:59:55.569350 3327 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-ca-bundle\") pod \"e94a1ef1-631e-4bed-b300-2c431484cc06\" (UID: \"e94a1ef1-631e-4bed-b300-2c431484cc06\") " Jan 23 17:59:55.569798 kubelet[3327]: I0123 17:59:55.569745 3327 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksnpk\" (UniqueName: \"kubernetes.io/projected/e94a1ef1-631e-4bed-b300-2c431484cc06-kube-api-access-ksnpk\") pod \"e94a1ef1-631e-4bed-b300-2c431484cc06\" (UID: \"e94a1ef1-631e-4bed-b300-2c431484cc06\") " Jan 23 17:59:55.583467 kubelet[3327]: I0123 17:59:55.583411 3327 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod 
"e94a1ef1-631e-4bed-b300-2c431484cc06" (UID: "e94a1ef1-631e-4bed-b300-2c431484cc06"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 17:59:55.584977 kubelet[3327]: I0123 17:59:55.584894 3327 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94a1ef1-631e-4bed-b300-2c431484cc06-kube-api-access-ksnpk" (OuterVolumeSpecName: "kube-api-access-ksnpk") pod "e94a1ef1-631e-4bed-b300-2c431484cc06" (UID: "e94a1ef1-631e-4bed-b300-2c431484cc06"). InnerVolumeSpecName "kube-api-access-ksnpk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 17:59:55.585873 kubelet[3327]: I0123 17:59:55.585804 3327 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e94a1ef1-631e-4bed-b300-2c431484cc06" (UID: "e94a1ef1-631e-4bed-b300-2c431484cc06"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 17:59:55.590730 systemd[1]: var-lib-kubelet-pods-e94a1ef1\x2d631e\x2d4bed\x2db300\x2d2c431484cc06-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 17:59:55.601690 systemd[1]: var-lib-kubelet-pods-e94a1ef1\x2d631e\x2d4bed\x2db300\x2d2c431484cc06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dksnpk.mount: Deactivated successfully. 
Jan 23 17:59:55.671766 kubelet[3327]: I0123 17:59:55.671417 3327 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ksnpk\" (UniqueName: \"kubernetes.io/projected/e94a1ef1-631e-4bed-b300-2c431484cc06-kube-api-access-ksnpk\") on node \"ip-172-31-24-204\" DevicePath \"\"" Jan 23 17:59:55.671766 kubelet[3327]: I0123 17:59:55.671617 3327 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-backend-key-pair\") on node \"ip-172-31-24-204\" DevicePath \"\"" Jan 23 17:59:55.671766 kubelet[3327]: I0123 17:59:55.671643 3327 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e94a1ef1-631e-4bed-b300-2c431484cc06-whisker-ca-bundle\") on node \"ip-172-31-24-204\" DevicePath \"\"" Jan 23 17:59:55.878581 systemd[1]: Removed slice kubepods-besteffort-pode94a1ef1_631e_4bed_b300_2c431484cc06.slice - libcontainer container kubepods-besteffort-pode94a1ef1_631e_4bed_b300_2c431484cc06.slice. Jan 23 17:59:56.364740 systemd[1]: Created slice kubepods-besteffort-pod1e389e39_d560_4d57_90e1_c702cef458f5.slice - libcontainer container kubepods-besteffort-pod1e389e39_d560_4d57_90e1_c702cef458f5.slice. 
Jan 23 17:59:56.479051 kubelet[3327]: I0123 17:59:56.478981 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkc5\" (UniqueName: \"kubernetes.io/projected/1e389e39-d560-4d57-90e1-c702cef458f5-kube-api-access-5tkc5\") pod \"whisker-68f8bc678b-vc2z8\" (UID: \"1e389e39-d560-4d57-90e1-c702cef458f5\") " pod="calico-system/whisker-68f8bc678b-vc2z8" Jan 23 17:59:56.480391 kubelet[3327]: I0123 17:59:56.479066 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e389e39-d560-4d57-90e1-c702cef458f5-whisker-backend-key-pair\") pod \"whisker-68f8bc678b-vc2z8\" (UID: \"1e389e39-d560-4d57-90e1-c702cef458f5\") " pod="calico-system/whisker-68f8bc678b-vc2z8" Jan 23 17:59:56.480391 kubelet[3327]: I0123 17:59:56.479110 3327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e389e39-d560-4d57-90e1-c702cef458f5-whisker-ca-bundle\") pod \"whisker-68f8bc678b-vc2z8\" (UID: \"1e389e39-d560-4d57-90e1-c702cef458f5\") " pod="calico-system/whisker-68f8bc678b-vc2z8" Jan 23 17:59:56.675230 containerd[1996]: time="2026-01-23T17:59:56.675073516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f8bc678b-vc2z8,Uid:1e389e39-d560-4d57-90e1-c702cef458f5,Namespace:calico-system,Attempt:0,}" Jan 23 17:59:56.986417 (udev-worker)[4488]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:59:56.987114 systemd-networkd[1894]: cali60153cecad7: Link UP Jan 23 17:59:56.987430 systemd-networkd[1894]: cali60153cecad7: Gained carrier Jan 23 17:59:57.025209 containerd[1996]: 2026-01-23 17:59:56.726 [INFO][4566] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 17:59:57.025209 containerd[1996]: 2026-01-23 17:59:56.810 [INFO][4566] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0 whisker-68f8bc678b- calico-system 1e389e39-d560-4d57-90e1-c702cef458f5 902 0 2026-01-23 17:59:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68f8bc678b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-204 whisker-68f8bc678b-vc2z8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali60153cecad7 [] [] }} ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-" Jan 23 17:59:57.025209 containerd[1996]: 2026-01-23 17:59:56.810 [INFO][4566] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.025209 containerd[1996]: 2026-01-23 17:59:56.895 [INFO][4577] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" HandleID="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Workload="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.896 
[INFO][4577] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" HandleID="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Workload="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-204", "pod":"whisker-68f8bc678b-vc2z8", "timestamp":"2026-01-23 17:59:56.895757009 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.896 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.896 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.896 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.912 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" host="ip-172-31-24-204" Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.923 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.932 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.938 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 17:59:57.025565 containerd[1996]: 2026-01-23 17:59:56.942 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.942 [INFO][4577] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" host="ip-172-31-24-204" Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.945 [INFO][4577] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1 Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.952 [INFO][4577] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" host="ip-172-31-24-204" Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.965 [INFO][4577] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.65/26] block=192.168.111.64/26 
handle="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" host="ip-172-31-24-204" Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.965 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.65/26] handle="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" host="ip-172-31-24-204" Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.965 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:57.025994 containerd[1996]: 2026-01-23 17:59:56.965 [INFO][4577] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.65/26] IPv6=[] ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" HandleID="k8s-pod-network.05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Workload="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.026518 containerd[1996]: 2026-01-23 17:59:56.973 [INFO][4566] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0", GenerateName:"whisker-68f8bc678b-", Namespace:"calico-system", SelfLink:"", UID:"1e389e39-d560-4d57-90e1-c702cef458f5", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68f8bc678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"whisker-68f8bc678b-vc2z8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali60153cecad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:57.026518 containerd[1996]: 2026-01-23 17:59:56.973 [INFO][4566] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.65/32] ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.026733 containerd[1996]: 2026-01-23 17:59:56.973 [INFO][4566] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60153cecad7 ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.026733 containerd[1996]: 2026-01-23 17:59:56.988 [INFO][4566] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.026844 containerd[1996]: 2026-01-23 17:59:56.989 [INFO][4566] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0", GenerateName:"whisker-68f8bc678b-", Namespace:"calico-system", SelfLink:"", UID:"1e389e39-d560-4d57-90e1-c702cef458f5", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68f8bc678b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1", Pod:"whisker-68f8bc678b-vc2z8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.111.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali60153cecad7", MAC:"0e:0f:f4:66:e6:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:57.026951 containerd[1996]: 2026-01-23 17:59:57.014 [INFO][4566] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" Namespace="calico-system" Pod="whisker-68f8bc678b-vc2z8" 
WorkloadEndpoint="ip--172--31--24--204-k8s-whisker--68f8bc678b--vc2z8-eth0" Jan 23 17:59:57.063575 containerd[1996]: time="2026-01-23T17:59:57.063209365Z" level=info msg="connecting to shim 05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1" address="unix:///run/containerd/s/3f2aa677060e447d1e0deae0e5c993c56fa35e77ff16744b5c2d20a147c4a639" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:57.118787 systemd[1]: Started cri-containerd-05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1.scope - libcontainer container 05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1. Jan 23 17:59:57.310615 containerd[1996]: time="2026-01-23T17:59:57.310363011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f8bc678b-vc2z8,Uid:1e389e39-d560-4d57-90e1-c702cef458f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"05e41a19162afeccc3125c7c34876d3244dcf469c6456c02c045cf686ca5ccf1\"" Jan 23 17:59:57.317170 containerd[1996]: time="2026-01-23T17:59:57.317085363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 17:59:57.636023 containerd[1996]: time="2026-01-23T17:59:57.635868724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:57.637217 containerd[1996]: time="2026-01-23T17:59:57.637129144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 17:59:57.637372 containerd[1996]: time="2026-01-23T17:59:57.637178548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 17:59:57.637671 kubelet[3327]: E0123 17:59:57.637609 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:59:57.638167 kubelet[3327]: E0123 17:59:57.637699 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 17:59:57.646830 kubelet[3327]: E0123 17:59:57.646676 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:da5c2c6cce0043a9a7f6a53d26e0e21c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfi
le:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:57.650880 containerd[1996]: time="2026-01-23T17:59:57.650805268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 17:59:57.857998 kubelet[3327]: I0123 17:59:57.857922 3327 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94a1ef1-631e-4bed-b300-2c431484cc06" path="/var/lib/kubelet/pods/e94a1ef1-631e-4bed-b300-2c431484cc06/volumes" Jan 23 17:59:57.925023 containerd[1996]: time="2026-01-23T17:59:57.924867318Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 17:59:57.927555 containerd[1996]: time="2026-01-23T17:59:57.927231954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 17:59:57.927555 containerd[1996]: time="2026-01-23T17:59:57.927379110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 17:59:57.928825 kubelet[3327]: E0123 17:59:57.927802 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:59:57.928825 kubelet[3327]: E0123 17:59:57.927865 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 17:59:57.929035 kubelet[3327]: E0123 17:59:57.928025 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,Securit
yContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 17:59:57.929947 kubelet[3327]: E0123 17:59:57.929758 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 17:59:58.122835 systemd-networkd[1894]: cali60153cecad7: Gained IPv6LL Jan 23 17:59:58.254213 kubelet[3327]: E0123 17:59:58.254138 3327 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 17:59:58.672249 systemd-networkd[1894]: vxlan.calico: Link UP Jan 23 17:59:58.672265 systemd-networkd[1894]: vxlan.calico: Gained carrier Jan 23 17:59:58.772369 (udev-worker)[4493]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 17:59:58.853153 containerd[1996]: time="2026-01-23T17:59:58.852925926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w5lcf,Uid:ca2b2126-a623-4814-a5b3-b02ad64431ba,Namespace:kube-system,Attempt:0,}" Jan 23 17:59:59.184451 systemd-networkd[1894]: califad6faa540d: Link UP Jan 23 17:59:59.185997 systemd-networkd[1894]: califad6faa540d: Gained carrier Jan 23 17:59:59.218998 containerd[1996]: 2026-01-23 17:59:59.015 [INFO][4827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0 coredns-668d6bf9bc- kube-system ca2b2126-a623-4814-a5b3-b02ad64431ba 830 0 2026-01-23 17:59:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-204 coredns-668d6bf9bc-w5lcf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califad6faa540d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-" Jan 23 17:59:59.218998 containerd[1996]: 2026-01-23 17:59:59.015 [INFO][4827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.218998 containerd[1996]: 2026-01-23 17:59:59.083 [INFO][4838] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" HandleID="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" 
Workload="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.083 [INFO][4838] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" HandleID="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Workload="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-204", "pod":"coredns-668d6bf9bc-w5lcf", "timestamp":"2026-01-23 17:59:59.083156547 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.083 [INFO][4838] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.083 [INFO][4838] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.083 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.098 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" host="ip-172-31-24-204" Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.105 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.118 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.122 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 17:59:59.221692 containerd[1996]: 2026-01-23 17:59:59.144 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.144 [INFO][4838] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" host="ip-172-31-24-204" Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.148 [INFO][4838] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.158 [INFO][4838] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" host="ip-172-31-24-204" Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.171 [INFO][4838] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.66/26] block=192.168.111.64/26 
handle="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" host="ip-172-31-24-204" Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.171 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.66/26] handle="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" host="ip-172-31-24-204" Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.171 [INFO][4838] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 17:59:59.224537 containerd[1996]: 2026-01-23 17:59:59.172 [INFO][4838] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.66/26] IPv6=[] ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" HandleID="k8s-pod-network.51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Workload="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.225125 containerd[1996]: 2026-01-23 17:59:59.177 [INFO][4827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca2b2126-a623-4814-a5b3-b02ad64431ba", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"coredns-668d6bf9bc-w5lcf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad6faa540d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:59.225125 containerd[1996]: 2026-01-23 17:59:59.178 [INFO][4827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.66/32] ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.225125 containerd[1996]: 2026-01-23 17:59:59.178 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califad6faa540d ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.225125 containerd[1996]: 2026-01-23 17:59:59.185 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.225125 containerd[1996]: 2026-01-23 17:59:59.188 [INFO][4827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca2b2126-a623-4814-a5b3-b02ad64431ba", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f", Pod:"coredns-668d6bf9bc-w5lcf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califad6faa540d", MAC:"aa:cc:e5:c1:44:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 17:59:59.225125 containerd[1996]: 2026-01-23 17:59:59.211 [INFO][4827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" Namespace="kube-system" Pod="coredns-668d6bf9bc-w5lcf" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--w5lcf-eth0" Jan 23 17:59:59.267319 kubelet[3327]: E0123 17:59:59.267222 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 17:59:59.281281 containerd[1996]: time="2026-01-23T17:59:59.281189896Z" level=info msg="connecting to shim 51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f" 
address="unix:///run/containerd/s/ca988ae28a6663d6804c105dc19d6d79ecbc15d8c0dd71699738ffd4019c8dee" namespace=k8s.io protocol=ttrpc version=3 Jan 23 17:59:59.374831 systemd[1]: Started cri-containerd-51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f.scope - libcontainer container 51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f. Jan 23 17:59:59.508301 containerd[1996]: time="2026-01-23T17:59:59.508181682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w5lcf,Uid:ca2b2126-a623-4814-a5b3-b02ad64431ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f\"" Jan 23 17:59:59.517152 containerd[1996]: time="2026-01-23T17:59:59.516625566Z" level=info msg="CreateContainer within sandbox \"51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 17:59:59.584597 containerd[1996]: time="2026-01-23T17:59:59.582124494Z" level=info msg="Container 0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0: CDI devices from CRI Config.CDIDevices: []" Jan 23 17:59:59.597785 containerd[1996]: time="2026-01-23T17:59:59.597416994Z" level=info msg="CreateContainer within sandbox \"51a6ee99be21c89511f145afb171f643bdd96cc09d10ef4a6714eef1d0117e1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0\"" Jan 23 17:59:59.600545 containerd[1996]: time="2026-01-23T17:59:59.599204394Z" level=info msg="StartContainer for \"0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0\"" Jan 23 17:59:59.603484 containerd[1996]: time="2026-01-23T17:59:59.602814198Z" level=info msg="connecting to shim 0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0" address="unix:///run/containerd/s/ca988ae28a6663d6804c105dc19d6d79ecbc15d8c0dd71699738ffd4019c8dee" protocol=ttrpc version=3 Jan 23 
17:59:59.661933 systemd[1]: Started cri-containerd-0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0.scope - libcontainer container 0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0. Jan 23 17:59:59.751219 containerd[1996]: time="2026-01-23T17:59:59.751140391Z" level=info msg="StartContainer for \"0a025cc2eba759cecaa347da06708409a5904790053cecf6cd95ec97f9d681c0\" returns successfully" Jan 23 17:59:59.850739 systemd-networkd[1894]: vxlan.calico: Gained IPv6LL Jan 23 17:59:59.864890 containerd[1996]: time="2026-01-23T17:59:59.864829567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-nzr4p,Uid:f787ec8c-40de-479c-b75d-f3d24f6583cc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 17:59:59.876857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705956636.mount: Deactivated successfully. Jan 23 18:00:00.367985 kubelet[3327]: I0123 18:00:00.366566 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w5lcf" podStartSLOduration=51.366539154 podStartE2EDuration="51.366539154s" podCreationTimestamp="2026-01-23 17:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:00:00.321208434 +0000 UTC m=+56.858595620" watchObservedRunningTime="2026-01-23 18:00:00.366539154 +0000 UTC m=+56.903926328" Jan 23 18:00:00.427921 systemd-networkd[1894]: cali7c99bc967ad: Link UP Jan 23 18:00:00.429800 systemd-networkd[1894]: cali7c99bc967ad: Gained carrier Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.059 [INFO][4952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0 calico-apiserver-8464998c88- calico-apiserver f787ec8c-40de-479c-b75d-f3d24f6583cc 831 0 2026-01-23 17:59:24 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8464998c88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-204 calico-apiserver-8464998c88-nzr4p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7c99bc967ad [] [] }} ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.061 [INFO][4952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.198 [INFO][4975] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" HandleID="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Workload="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.199 [INFO][4975] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" HandleID="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Workload="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-204", "pod":"calico-apiserver-8464998c88-nzr4p", 
"timestamp":"2026-01-23 18:00:00.198853037 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.199 [INFO][4975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.199 [INFO][4975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.199 [INFO][4975] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.255 [INFO][4975] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.301 [INFO][4975] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.350 [INFO][4975] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.356 [INFO][4975] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.362 [INFO][4975] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.362 [INFO][4975] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.371 [INFO][4975] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273 Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.387 [INFO][4975] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.411 [INFO][4975] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.67/26] block=192.168.111.64/26 handle="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.412 [INFO][4975] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.67/26] handle="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" host="ip-172-31-24-204" Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.412 [INFO][4975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:00:00.459185 containerd[1996]: 2026-01-23 18:00:00.412 [INFO][4975] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.67/26] IPv6=[] ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" HandleID="k8s-pod-network.d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Workload="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.463020 containerd[1996]: 2026-01-23 18:00:00.418 [INFO][4952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0", GenerateName:"calico-apiserver-8464998c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"f787ec8c-40de-479c-b75d-f3d24f6583cc", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8464998c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"calico-apiserver-8464998c88-nzr4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c99bc967ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:00.463020 containerd[1996]: 2026-01-23 18:00:00.419 [INFO][4952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.67/32] ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.463020 containerd[1996]: 2026-01-23 18:00:00.419 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c99bc967ad ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.463020 containerd[1996]: 2026-01-23 18:00:00.430 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.463020 containerd[1996]: 2026-01-23 18:00:00.431 [INFO][4952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0", GenerateName:"calico-apiserver-8464998c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"f787ec8c-40de-479c-b75d-f3d24f6583cc", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8464998c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273", Pod:"calico-apiserver-8464998c88-nzr4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7c99bc967ad", MAC:"06:6c:70:14:dd:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:00.463020 containerd[1996]: 2026-01-23 18:00:00.452 [INFO][4952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-nzr4p" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--nzr4p-eth0" Jan 23 18:00:00.513407 containerd[1996]: time="2026-01-23T18:00:00.512697343Z" level=info msg="connecting to shim 
d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273" address="unix:///run/containerd/s/5079378ff4a992e60ccdfc06b246bd0bc1293126c07262b3783beb1d8615b81a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:00:00.603989 systemd[1]: Started cri-containerd-d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273.scope - libcontainer container d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273. Jan 23 18:00:00.683126 systemd-networkd[1894]: califad6faa540d: Gained IPv6LL Jan 23 18:00:00.738743 containerd[1996]: time="2026-01-23T18:00:00.738597416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-nzr4p,Uid:f787ec8c-40de-479c-b75d-f3d24f6583cc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4bf2a506b63fc823b8ffed04c144cbf2b7c5e0db247e3f706e326f2f2a2e273\"" Jan 23 18:00:00.744724 containerd[1996]: time="2026-01-23T18:00:00.744678248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:00.851802 containerd[1996]: time="2026-01-23T18:00:00.851544884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f8ffc4dc-6h2ph,Uid:23974028-c047-4f8c-92ef-f4b897791230,Namespace:calico-system,Attempt:0,}" Jan 23 18:00:00.852639 containerd[1996]: time="2026-01-23T18:00:00.852465056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29vr,Uid:29357653-d3ec-4227-8eb2-15d81c3dcd98,Namespace:kube-system,Attempt:0,}" Jan 23 18:00:00.854392 containerd[1996]: time="2026-01-23T18:00:00.854318000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-xdthn,Uid:01be8348-3893-401c-b7b7-ba407784cdaf,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:00:01.105410 systemd[1]: Started sshd@8-172.31.24.204:22-68.220.241.50:57194.service - OpenSSH per-connection server daemon (68.220.241.50:57194). 
Jan 23 18:00:01.393214 systemd-networkd[1894]: cali6c83ece6cc1: Link UP Jan 23 18:00:01.396223 systemd-networkd[1894]: cali6c83ece6cc1: Gained carrier Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.071 [INFO][5050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0 calico-kube-controllers-77f8ffc4dc- calico-system 23974028-c047-4f8c-92ef-f4b897791230 827 0 2026-01-23 17:59:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77f8ffc4dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-204 calico-kube-controllers-77f8ffc4dc-6h2ph eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6c83ece6cc1 [] [] }} ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.071 [INFO][5050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.252 [INFO][5091] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" HandleID="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" 
Workload="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.256 [INFO][5091] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" HandleID="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Workload="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003300e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-204", "pod":"calico-kube-controllers-77f8ffc4dc-6h2ph", "timestamp":"2026-01-23 18:00:01.252614118 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.256 [INFO][5091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.256 [INFO][5091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.256 [INFO][5091] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.308 [INFO][5091] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.323 [INFO][5091] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.333 [INFO][5091] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.338 [INFO][5091] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.343 [INFO][5091] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.343 [INFO][5091] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.350 [INFO][5091] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8 Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.358 [INFO][5091] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.374 [INFO][5091] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.68/26] block=192.168.111.64/26 
handle="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.374 [INFO][5091] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.68/26] handle="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" host="ip-172-31-24-204" Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.375 [INFO][5091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:00:01.451699 containerd[1996]: 2026-01-23 18:00:01.375 [INFO][5091] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.68/26] IPv6=[] ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" HandleID="k8s-pod-network.f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Workload="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.454928 containerd[1996]: 2026-01-23 18:00:01.383 [INFO][5050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0", GenerateName:"calico-kube-controllers-77f8ffc4dc-", Namespace:"calico-system", SelfLink:"", UID:"23974028-c047-4f8c-92ef-f4b897791230", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f8ffc4dc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"calico-kube-controllers-77f8ffc4dc-6h2ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c83ece6cc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:01.454928 containerd[1996]: 2026-01-23 18:00:01.384 [INFO][5050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.68/32] ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.454928 containerd[1996]: 2026-01-23 18:00:01.384 [INFO][5050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c83ece6cc1 ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.454928 containerd[1996]: 2026-01-23 18:00:01.395 [INFO][5050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" 
WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.454928 containerd[1996]: 2026-01-23 18:00:01.400 [INFO][5050] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0", GenerateName:"calico-kube-controllers-77f8ffc4dc-", Namespace:"calico-system", SelfLink:"", UID:"23974028-c047-4f8c-92ef-f4b897791230", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f8ffc4dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8", Pod:"calico-kube-controllers-77f8ffc4dc-6h2ph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.111.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c83ece6cc1", 
MAC:"aa:a6:e8:c8:8c:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:01.454928 containerd[1996]: 2026-01-23 18:00:01.444 [INFO][5050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" Namespace="calico-system" Pod="calico-kube-controllers-77f8ffc4dc-6h2ph" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--kube--controllers--77f8ffc4dc--6h2ph-eth0" Jan 23 18:00:01.534174 containerd[1996]: time="2026-01-23T18:00:01.533964884Z" level=info msg="connecting to shim f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8" address="unix:///run/containerd/s/10ea52611a58b74150ab929581e6178cfa5d8e36c84d2b7828d5841dd1165654" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:00:01.578895 systemd-networkd[1894]: calibd77a6015d4: Link UP Jan 23 18:00:01.579313 systemd-networkd[1894]: calibd77a6015d4: Gained carrier Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.079 [INFO][5048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0 calico-apiserver-8464998c88- calico-apiserver 01be8348-3893-401c-b7b7-ba407784cdaf 828 0 2026-01-23 17:59:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8464998c88 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-204 calico-apiserver-8464998c88-xdthn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd77a6015d4 [] [] }} ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" 
WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.080 [INFO][5048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.264 [INFO][5099] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" HandleID="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Workload="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.266 [INFO][5099] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" HandleID="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Workload="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000377a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-204", "pod":"calico-apiserver-8464998c88-xdthn", "timestamp":"2026-01-23 18:00:01.264024354 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.266 [INFO][5099] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.374 [INFO][5099] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.374 [INFO][5099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.412 [INFO][5099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.436 [INFO][5099] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.463 [INFO][5099] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.470 [INFO][5099] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.482 [INFO][5099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.483 [INFO][5099] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.487 [INFO][5099] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2 Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.499 [INFO][5099] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.537 [INFO][5099] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.69/26] block=192.168.111.64/26 
handle="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.538 [INFO][5099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.69/26] handle="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" host="ip-172-31-24-204" Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.538 [INFO][5099] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:00:01.662302 containerd[1996]: 2026-01-23 18:00:01.538 [INFO][5099] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.69/26] IPv6=[] ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" HandleID="k8s-pod-network.a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Workload="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 18:00:01.663496 containerd[1996]: 2026-01-23 18:00:01.551 [INFO][5048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0", GenerateName:"calico-apiserver-8464998c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"01be8348-3893-401c-b7b7-ba407784cdaf", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8464998c88", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"calico-apiserver-8464998c88-xdthn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd77a6015d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:01.663496 containerd[1996]: 2026-01-23 18:00:01.553 [INFO][5048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.69/32] ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 18:00:01.663496 containerd[1996]: 2026-01-23 18:00:01.554 [INFO][5048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd77a6015d4 ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 18:00:01.663496 containerd[1996]: 2026-01-23 18:00:01.578 [INFO][5048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 
18:00:01.663496 containerd[1996]: 2026-01-23 18:00:01.612 [INFO][5048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0", GenerateName:"calico-apiserver-8464998c88-", Namespace:"calico-apiserver", SelfLink:"", UID:"01be8348-3893-401c-b7b7-ba407784cdaf", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8464998c88", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2", Pod:"calico-apiserver-8464998c88-xdthn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.111.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd77a6015d4", MAC:"16:04:e7:bb:61:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 
18:00:01.663496 containerd[1996]: 2026-01-23 18:00:01.645 [INFO][5048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" Namespace="calico-apiserver" Pod="calico-apiserver-8464998c88-xdthn" WorkloadEndpoint="ip--172--31--24--204-k8s-calico--apiserver--8464998c88--xdthn-eth0" Jan 23 18:00:01.663809 systemd[1]: Started cri-containerd-f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8.scope - libcontainer container f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8. Jan 23 18:00:01.749539 containerd[1996]: time="2026-01-23T18:00:01.748668969Z" level=info msg="connecting to shim a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2" address="unix:///run/containerd/s/734d5d6098e281587aaa5b42e0c8808ad0953371b70b1b79e5220490382856aa" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:00:01.763589 sshd[5089]: Accepted publickey for core from 68.220.241.50 port 57194 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:01.768298 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:01.788878 systemd-networkd[1894]: cali8bc04304140: Link UP Jan 23 18:00:01.795236 systemd-logind[1978]: New session 8 of user core. Jan 23 18:00:01.812235 systemd-networkd[1894]: cali8bc04304140: Gained carrier Jan 23 18:00:01.833085 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 18:00:01.854171 containerd[1996]: time="2026-01-23T18:00:01.853984437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzd8d,Uid:bd861cd6-0ac7-4fc8-b917-14516a6e2c66,Namespace:calico-system,Attempt:0,}" Jan 23 18:00:01.899411 systemd-networkd[1894]: cali7c99bc967ad: Gained IPv6LL Jan 23 18:00:01.906751 containerd[1996]: time="2026-01-23T18:00:01.906683686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:01.909660 containerd[1996]: time="2026-01-23T18:00:01.909582142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:01.909993 containerd[1996]: time="2026-01-23T18:00:01.909958366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:01.911227 kubelet[3327]: E0123 18:00:01.911169 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:01.912800 kubelet[3327]: E0123 18:00:01.912186 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:01.914674 kubelet[3327]: E0123 18:00:01.914033 3327 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z46r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-nzr4p_calico-apiserver(f787ec8c-40de-479c-b75d-f3d24f6583cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:01.918908 kubelet[3327]: E0123 18:00:01.918831 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.086 [INFO][5049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0 coredns-668d6bf9bc- kube-system 29357653-d3ec-4227-8eb2-15d81c3dcd98 820 0 2026-01-23 17:59:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-204 coredns-668d6bf9bc-k29vr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8bc04304140 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.087 [INFO][5049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.321 [INFO][5093] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" HandleID="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Workload="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.321 [INFO][5093] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" HandleID="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Workload="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f2520), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-204", 
"pod":"coredns-668d6bf9bc-k29vr", "timestamp":"2026-01-23 18:00:01.321330403 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.321 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.539 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.539 [INFO][5093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.596 [INFO][5093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.630 [INFO][5093] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.678 [INFO][5093] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.684 [INFO][5093] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.694 [INFO][5093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.695 [INFO][5093] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 
2026-01-23 18:00:01.700 [INFO][5093] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723 Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.716 [INFO][5093] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.754 [INFO][5093] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.70/26] block=192.168.111.64/26 handle="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.755 [INFO][5093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.70/26] handle="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" host="ip-172-31-24-204" Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.756 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:00:01.930806 containerd[1996]: 2026-01-23 18:00:01.757 [INFO][5093] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.70/26] IPv6=[] ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" HandleID="k8s-pod-network.863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Workload="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.932322 containerd[1996]: 2026-01-23 18:00:01.773 [INFO][5049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29357653-d3ec-4227-8eb2-15d81c3dcd98", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"coredns-668d6bf9bc-k29vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bc04304140", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:01.932322 containerd[1996]: 2026-01-23 18:00:01.774 [INFO][5049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.70/32] ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.932322 containerd[1996]: 2026-01-23 18:00:01.774 [INFO][5049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8bc04304140 ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.932322 containerd[1996]: 2026-01-23 18:00:01.811 [INFO][5049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.932322 containerd[1996]: 2026-01-23 18:00:01.822 [INFO][5049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"29357653-d3ec-4227-8eb2-15d81c3dcd98", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723", Pod:"coredns-668d6bf9bc-k29vr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.111.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8bc04304140", MAC:"4e:63:a5:f1:d3:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:01.932322 containerd[1996]: 2026-01-23 18:00:01.898 [INFO][5049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" Namespace="kube-system" Pod="coredns-668d6bf9bc-k29vr" WorkloadEndpoint="ip--172--31--24--204-k8s-coredns--668d6bf9bc--k29vr-eth0" Jan 23 18:00:01.937867 systemd[1]: Started cri-containerd-a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2.scope - libcontainer container a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2. Jan 23 18:00:02.034118 containerd[1996]: time="2026-01-23T18:00:02.034052406Z" level=info msg="connecting to shim 863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723" address="unix:///run/containerd/s/fb8e92fedb65e2d8be3b77b618e69eb431581e9d000c916b749c7af9697b35af" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:00:02.158856 systemd[1]: Started cri-containerd-863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723.scope - libcontainer container 863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723. Jan 23 18:00:02.288547 kubelet[3327]: E0123 18:00:02.288187 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:00:02.404924 containerd[1996]: time="2026-01-23T18:00:02.404854532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f8ffc4dc-6h2ph,Uid:23974028-c047-4f8c-92ef-f4b897791230,Namespace:calico-system,Attempt:0,} returns sandbox id \"f62ffcd7fc2b991544c01d8ba0f1adc7dd6a3dedfe2eb901abdae30bb98e54c8\"" Jan 23 18:00:02.412548 containerd[1996]: 
time="2026-01-23T18:00:02.411349280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:00:02.606719 sshd[5199]: Connection closed by 68.220.241.50 port 57194 Jan 23 18:00:02.607796 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:02.622794 systemd-networkd[1894]: cali61bc3567bae: Link UP Jan 23 18:00:02.625365 systemd-networkd[1894]: cali61bc3567bae: Gained carrier Jan 23 18:00:02.631295 systemd[1]: sshd@8-172.31.24.204:22-68.220.241.50:57194.service: Deactivated successfully. Jan 23 18:00:02.638024 containerd[1996]: time="2026-01-23T18:00:02.635413245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k29vr,Uid:29357653-d3ec-4227-8eb2-15d81c3dcd98,Namespace:kube-system,Attempt:0,} returns sandbox id \"863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723\"" Jan 23 18:00:02.640084 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:00:02.648811 systemd-logind[1978]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:00:02.658785 systemd-logind[1978]: Removed session 8. 
Jan 23 18:00:02.659437 containerd[1996]: time="2026-01-23T18:00:02.659371317Z" level=info msg="CreateContainer within sandbox \"863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:00:02.689191 containerd[1996]: time="2026-01-23T18:00:02.688144449Z" level=info msg="Container 036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:00:02.705806 containerd[1996]: time="2026-01-23T18:00:02.705743361Z" level=info msg="CreateContainer within sandbox \"863296c0f66cb87c05dff361ac04ea6dff2710d4d58955095cb0908feeb5e723\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc\"" Jan 23 18:00:02.707143 containerd[1996]: time="2026-01-23T18:00:02.707072697Z" level=info msg="StartContainer for \"036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc\"" Jan 23 18:00:02.709617 containerd[1996]: time="2026-01-23T18:00:02.709560334Z" level=info msg="connecting to shim 036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc" address="unix:///run/containerd/s/fb8e92fedb65e2d8be3b77b618e69eb431581e9d000c916b749c7af9697b35af" protocol=ttrpc version=3 Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.114 [INFO][5206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0 csi-node-driver- calico-system bd861cd6-0ac7-4fc8-b917-14516a6e2c66 722 0 2026-01-23 17:59:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-204 csi-node-driver-wzd8d eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] cali61bc3567bae [] [] }} ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.115 [INFO][5206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.319 [INFO][5279] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" HandleID="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Workload="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.325 [INFO][5279] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" HandleID="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Workload="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001237a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-204", "pod":"csi-node-driver-wzd8d", "timestamp":"2026-01-23 18:00:02.319714256 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.325 [INFO][5279] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.328 [INFO][5279] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.330 [INFO][5279] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.441 [INFO][5279] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.511 [INFO][5279] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.526 [INFO][5279] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.532 [INFO][5279] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.540 [INFO][5279] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.540 [INFO][5279] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.544 [INFO][5279] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.552 [INFO][5279] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" host="ip-172-31-24-204" Jan 23 18:00:02.750705 
containerd[1996]: 2026-01-23 18:00:02.585 [INFO][5279] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.71/26] block=192.168.111.64/26 handle="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.585 [INFO][5279] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.71/26] handle="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" host="ip-172-31-24-204" Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.585 [INFO][5279] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:00:02.750705 containerd[1996]: 2026-01-23 18:00:02.585 [INFO][5279] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.71/26] IPv6=[] ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" HandleID="k8s-pod-network.264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Workload="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.751985 containerd[1996]: 2026-01-23 18:00:02.600 [INFO][5206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd861cd6-0ac7-4fc8-b917-14516a6e2c66", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"csi-node-driver-wzd8d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61bc3567bae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:02.751985 containerd[1996]: 2026-01-23 18:00:02.601 [INFO][5206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.71/32] ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.751985 containerd[1996]: 2026-01-23 18:00:02.601 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61bc3567bae ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.751985 containerd[1996]: 2026-01-23 18:00:02.640 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.751985 
containerd[1996]: 2026-01-23 18:00:02.647 [INFO][5206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd861cd6-0ac7-4fc8-b917-14516a6e2c66", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b", Pod:"csi-node-driver-wzd8d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.111.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali61bc3567bae", MAC:"1e:9a:a1:b9:cd:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:02.751985 containerd[1996]: 2026-01-23 18:00:02.739 
[INFO][5206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" Namespace="calico-system" Pod="csi-node-driver-wzd8d" WorkloadEndpoint="ip--172--31--24--204-k8s-csi--node--driver--wzd8d-eth0" Jan 23 18:00:02.767008 systemd[1]: Started cri-containerd-036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc.scope - libcontainer container 036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc. Jan 23 18:00:02.820994 containerd[1996]: time="2026-01-23T18:00:02.820866166Z" level=info msg="connecting to shim 264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b" address="unix:///run/containerd/s/331e164502c30284b3925a27022ab777ee8b36951715165e2be9ba2913faea25" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:00:02.923011 systemd-networkd[1894]: cali8bc04304140: Gained IPv6LL Jan 23 18:00:02.966929 containerd[1996]: time="2026-01-23T18:00:02.966822515Z" level=info msg="StartContainer for \"036b0628bffc55b6ad3793c3cc6fa8b0c04260aad1e1eb338dcb40d0378862cc\" returns successfully" Jan 23 18:00:02.977294 systemd[1]: Started cri-containerd-264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b.scope - libcontainer container 264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b. 
Jan 23 18:00:02.998554 containerd[1996]: time="2026-01-23T18:00:02.997411127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8464998c88-xdthn,Uid:01be8348-3893-401c-b7b7-ba407784cdaf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a2f2aedfb7a09b90bda04c72df0117d30fb3e39c8d660d048351d9bbf9028fe2\"" Jan 23 18:00:03.065651 containerd[1996]: time="2026-01-23T18:00:03.065487859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzd8d,Uid:bd861cd6-0ac7-4fc8-b917-14516a6e2c66,Namespace:calico-system,Attempt:0,} returns sandbox id \"264a00dc4720aefe0d820686f230375f284b5aa855b62917746852e26a3a1e7b\"" Jan 23 18:00:03.306750 systemd-networkd[1894]: cali6c83ece6cc1: Gained IPv6LL Jan 23 18:00:03.307189 systemd-networkd[1894]: calibd77a6015d4: Gained IPv6LL Jan 23 18:00:03.337858 kubelet[3327]: I0123 18:00:03.337645 3327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k29vr" podStartSLOduration=54.337586985 podStartE2EDuration="54.337586985s" podCreationTimestamp="2026-01-23 17:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:00:03.334246689 +0000 UTC m=+59.871633887" watchObservedRunningTime="2026-01-23 18:00:03.337586985 +0000 UTC m=+59.874974159" Jan 23 18:00:03.640908 containerd[1996]: time="2026-01-23T18:00:03.640722934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:03.643441 containerd[1996]: time="2026-01-23T18:00:03.643356958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:00:03.643583 
containerd[1996]: time="2026-01-23T18:00:03.643388014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:03.643901 kubelet[3327]: E0123 18:00:03.643826 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:03.643980 kubelet[3327]: E0123 18:00:03.643900 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:03.644302 kubelet[3327]: E0123 18:00:03.644201 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgtnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f8ffc4dc-6h2ph_calico-system(23974028-c047-4f8c-92ef-f4b897791230): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:03.645647 kubelet[3327]: E0123 18:00:03.645455 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:00:03.645847 containerd[1996]: time="2026-01-23T18:00:03.645694150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:03.854261 containerd[1996]: 
time="2026-01-23T18:00:03.854196275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-82cnj,Uid:169e9a61-dd6f-4dcb-a857-0adba680dfb0,Namespace:calico-system,Attempt:0,}" Jan 23 18:00:03.883690 systemd-networkd[1894]: cali61bc3567bae: Gained IPv6LL Jan 23 18:00:04.204997 systemd-networkd[1894]: cali28de6e27204: Link UP Jan 23 18:00:04.206833 systemd-networkd[1894]: cali28de6e27204: Gained carrier Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:03.993 [INFO][5408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0 goldmane-666569f655- calico-system 169e9a61-dd6f-4dcb-a857-0adba680dfb0 832 0 2026-01-23 17:59:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-204 goldmane-666569f655-82cnj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali28de6e27204 [] [] }} ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:03.993 [INFO][5408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.070 [INFO][5421] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" 
HandleID="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Workload="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.071 [INFO][5421] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" HandleID="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Workload="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003338a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-204", "pod":"goldmane-666569f655-82cnj", "timestamp":"2026-01-23 18:00:04.070881608 +0000 UTC"}, Hostname:"ip-172-31-24-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.071 [INFO][5421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.071 [INFO][5421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.071 [INFO][5421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-204' Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.088 [INFO][5421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.097 [INFO][5421] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.111 [INFO][5421] ipam/ipam.go 511: Trying affinity for 192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.116 [INFO][5421] ipam/ipam.go 158: Attempting to load block cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.123 [INFO][5421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.111.64/26 host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.124 [INFO][5421] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.111.64/26 handle="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.126 [INFO][5421] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207 Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.147 [INFO][5421] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.111.64/26 handle="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.190 [INFO][5421] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.111.72/26] block=192.168.111.64/26 
handle="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.191 [INFO][5421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.111.72/26] handle="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" host="ip-172-31-24-204" Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.191 [INFO][5421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:00:04.242343 containerd[1996]: 2026-01-23 18:00:04.191 [INFO][5421] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.111.72/26] IPv6=[] ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" HandleID="k8s-pod-network.addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Workload="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.244484 containerd[1996]: 2026-01-23 18:00:04.196 [INFO][5408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"169e9a61-dd6f-4dcb-a857-0adba680dfb0", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"", Pod:"goldmane-666569f655-82cnj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28de6e27204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:04.244484 containerd[1996]: 2026-01-23 18:00:04.197 [INFO][5408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.111.72/32] ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.244484 containerd[1996]: 2026-01-23 18:00:04.197 [INFO][5408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28de6e27204 ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.244484 containerd[1996]: 2026-01-23 18:00:04.204 [INFO][5408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.244484 containerd[1996]: 2026-01-23 18:00:04.205 [INFO][5408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"169e9a61-dd6f-4dcb-a857-0adba680dfb0", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 17, 59, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-204", ContainerID:"addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207", Pod:"goldmane-666569f655-82cnj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.111.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali28de6e27204", MAC:"6e:06:2e:a0:ec:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:00:04.244484 containerd[1996]: 2026-01-23 18:00:04.236 [INFO][5408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" Namespace="calico-system" Pod="goldmane-666569f655-82cnj" 
WorkloadEndpoint="ip--172--31--24--204-k8s-goldmane--666569f655--82cnj-eth0" Jan 23 18:00:04.320668 containerd[1996]: time="2026-01-23T18:00:04.319956189Z" level=info msg="connecting to shim addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207" address="unix:///run/containerd/s/a0498f3c02607925c64ac62c5fd00a383e58f530547727604e5c072194183e3d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:00:04.325859 kubelet[3327]: E0123 18:00:04.325229 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:00:04.395278 systemd[1]: Started cri-containerd-addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207.scope - libcontainer container addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207. 
Jan 23 18:00:04.574277 containerd[1996]: time="2026-01-23T18:00:04.574131875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-82cnj,Uid:169e9a61-dd6f-4dcb-a857-0adba680dfb0,Namespace:calico-system,Attempt:0,} returns sandbox id \"addb60986b01d405a797fd487ffdf1bee9e715319ed37fd7c6ad173a18027207\"" Jan 23 18:00:04.728633 containerd[1996]: time="2026-01-23T18:00:04.728573196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:04.731773 containerd[1996]: time="2026-01-23T18:00:04.731694216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:04.731964 containerd[1996]: time="2026-01-23T18:00:04.731719668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:04.732218 kubelet[3327]: E0123 18:00:04.732160 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:04.732739 kubelet[3327]: E0123 18:00:04.732229 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:04.732739 kubelet[3327]: E0123 18:00:04.732685 3327 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-xdthn_calico-apiserver(01be8348-3893-401c-b7b7-ba407784cdaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:04.733412 containerd[1996]: time="2026-01-23T18:00:04.733356372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:00:04.733949 kubelet[3327]: E0123 18:00:04.733888 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:00:05.327149 kubelet[3327]: E0123 18:00:05.326743 3327 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:00:05.802973 systemd-networkd[1894]: cali28de6e27204: Gained IPv6LL Jan 23 18:00:06.033218 containerd[1996]: time="2026-01-23T18:00:06.032566282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:06.035403 containerd[1996]: time="2026-01-23T18:00:06.035324122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:00:06.035616 containerd[1996]: time="2026-01-23T18:00:06.035473126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:00:06.035948 kubelet[3327]: E0123 18:00:06.035858 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:06.037235 kubelet[3327]: E0123 18:00:06.036050 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:06.037235 kubelet[3327]: E0123 18:00:06.036365 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{}
,StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:06.038044 containerd[1996]: time="2026-01-23T18:00:06.037417306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:00:06.755821 containerd[1996]: time="2026-01-23T18:00:06.755740634Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:06.757013 containerd[1996]: time="2026-01-23T18:00:06.756936842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:00:06.757176 containerd[1996]: time="2026-01-23T18:00:06.756979094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:06.757643 kubelet[3327]: E0123 18:00:06.757328 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:06.757643 kubelet[3327]: E0123 18:00:06.757389 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:06.757830 kubelet[3327]: E0123 18:00:06.757687 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztspz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-82cnj_calico-system(169e9a61-dd6f-4dcb-a857-0adba680dfb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:06.758964 kubelet[3327]: E0123 18:00:06.758880 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 
18:00:06.759361 containerd[1996]: time="2026-01-23T18:00:06.759225062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:00:07.331276 kubelet[3327]: E0123 18:00:07.331185 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:00:07.612615 containerd[1996]: time="2026-01-23T18:00:07.612202010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:07.613844 containerd[1996]: time="2026-01-23T18:00:07.613757594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:00:07.613938 containerd[1996]: time="2026-01-23T18:00:07.613903214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:00:07.614241 kubelet[3327]: E0123 18:00:07.614161 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:07.614330 kubelet[3327]: E0123 18:00:07.614255 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:07.614706 kubelet[3327]: E0123 18:00:07.614621 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilitie
s{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:07.616444 kubelet[3327]: E0123 18:00:07.616214 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:00:07.714960 systemd[1]: Started sshd@9-172.31.24.204:22-68.220.241.50:45020.service - OpenSSH per-connection server daemon (68.220.241.50:45020). 
Jan 23 18:00:07.891483 ntpd[2197]: Listen normally on 6 vxlan.calico 192.168.111.64:123 Jan 23 18:00:07.891635 ntpd[2197]: Listen normally on 7 cali60153cecad7 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 18:00:07.891684 ntpd[2197]: Listen normally on 8 vxlan.calico [fe80::6497:59ff:feed:7835%5]:123 Jan 23 18:00:07.891730 ntpd[2197]: Listen normally on 9 califad6faa540d [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 18:00:07.891774 ntpd[2197]: Listen normally on 10 cali7c99bc967ad [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 18:00:07.891819 ntpd[2197]: Listen normally on 11 cali6c83ece6cc1 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 18:00:07.891863 ntpd[2197]: Listen normally 
on 12 calibd77a6015d4 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 18:00:07.891908 ntpd[2197]: Listen normally on 13 cali8bc04304140 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 18:00:07.891965 ntpd[2197]: Listen normally on 14 cali61bc3567bae [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 18:00:07.892007 ntpd[2197]: Listen normally on 15 cali28de6e27204 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 18:00:08.272461 sshd[5501]: Accepted publickey for core from 68.220.241.50 port 45020 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:08.275151 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:08.285470 systemd-logind[1978]: New session 9 of user core. Jan 23 18:00:08.290839 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:00:08.336689 kubelet[3327]: E0123 18:00:08.336453 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:00:08.795990 sshd[5505]: Connection closed by 68.220.241.50 port 45020 Jan 23 18:00:08.796811 
sshd-session[5501]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:08.812248 systemd[1]: sshd@9-172.31.24.204:22-68.220.241.50:45020.service: Deactivated successfully. Jan 23 18:00:08.820289 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:00:08.826707 systemd-logind[1978]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:00:08.830721 systemd-logind[1978]: Removed session 9. Jan 23 18:00:13.893492 systemd[1]: Started sshd@10-172.31.24.204:22-68.220.241.50:41158.service - OpenSSH per-connection server daemon (68.220.241.50:41158). Jan 23 18:00:14.417638 sshd[5528]: Accepted publickey for core from 68.220.241.50 port 41158 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:14.419862 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:14.428586 systemd-logind[1978]: New session 10 of user core. Jan 23 18:00:14.442835 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:00:14.852779 containerd[1996]: time="2026-01-23T18:00:14.852189850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:14.904071 sshd[5531]: Connection closed by 68.220.241.50 port 41158 Jan 23 18:00:14.903157 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:14.917200 systemd[1]: sshd@10-172.31.24.204:22-68.220.241.50:41158.service: Deactivated successfully. Jan 23 18:00:14.926298 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:00:14.930690 systemd-logind[1978]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:00:14.935373 systemd-logind[1978]: Removed session 10. 
Jan 23 18:00:15.571406 containerd[1996]: time="2026-01-23T18:00:15.571201149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:15.573175 containerd[1996]: time="2026-01-23T18:00:15.573061365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:15.573175 containerd[1996]: time="2026-01-23T18:00:15.573133497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:15.573409 kubelet[3327]: E0123 18:00:15.573333 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:15.575627 kubelet[3327]: E0123 18:00:15.573429 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:15.575627 kubelet[3327]: E0123 18:00:15.575455 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z46r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-nzr4p_calico-apiserver(f787ec8c-40de-479c-b75d-f3d24f6583cc): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:15.576487 containerd[1996]: time="2026-01-23T18:00:15.576305313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:00:15.577835 kubelet[3327]: E0123 18:00:15.577717 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:00:15.860469 containerd[1996]: time="2026-01-23T18:00:15.860295131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:15.863082 containerd[1996]: time="2026-01-23T18:00:15.862974839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:00:15.863271 containerd[1996]: time="2026-01-23T18:00:15.862995323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:15.863955 kubelet[3327]: E0123 18:00:15.863594 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:15.863955 kubelet[3327]: E0123 18:00:15.863747 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:15.864585 kubelet[3327]: E0123 18:00:15.864070 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgtnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f8ffc4dc-6h2ph_calico-system(23974028-c047-4f8c-92ef-f4b897791230): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:15.865225 containerd[1996]: time="2026-01-23T18:00:15.864390251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:00:15.866189 kubelet[3327]: E0123 18:00:15.865933 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:00:20.010138 systemd[1]: Started sshd@11-172.31.24.204:22-68.220.241.50:41168.service - OpenSSH per-connection server daemon (68.220.241.50:41168). Jan 23 18:00:20.551667 sshd[5545]: Accepted publickey for core from 68.220.241.50 port 41168 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:20.554114 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:20.563840 systemd-logind[1978]: New session 11 of user core. Jan 23 18:00:20.574805 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:00:21.044607 sshd[5554]: Connection closed by 68.220.241.50 port 41168 Jan 23 18:00:21.045469 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:21.055159 systemd[1]: sshd@11-172.31.24.204:22-68.220.241.50:41168.service: Deactivated successfully. Jan 23 18:00:21.060246 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:00:21.063627 systemd-logind[1978]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:00:21.067996 systemd-logind[1978]: Removed session 11. Jan 23 18:00:21.141420 systemd[1]: Started sshd@12-172.31.24.204:22-68.220.241.50:41182.service - OpenSSH per-connection server daemon (68.220.241.50:41182). Jan 23 18:00:21.662326 sshd[5567]: Accepted publickey for core from 68.220.241.50 port 41182 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:21.663946 sshd-session[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:21.672270 systemd-logind[1978]: New session 12 of user core. 
Jan 23 18:00:21.687850 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:00:21.826519 containerd[1996]: time="2026-01-23T18:00:21.826436368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:21.828791 containerd[1996]: time="2026-01-23T18:00:21.828714808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:00:21.828921 containerd[1996]: time="2026-01-23T18:00:21.828840880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:00:21.829216 kubelet[3327]: E0123 18:00:21.829139 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:00:21.829824 kubelet[3327]: E0123 18:00:21.829210 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:00:21.830730 kubelet[3327]: E0123 18:00:21.829740 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:da5c2c6cce0043a9a7f6a53d26e0e21c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:21.830963 containerd[1996]: time="2026-01-23T18:00:21.830673232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:22.237813 
sshd[5570]: Connection closed by 68.220.241.50 port 41182 Jan 23 18:00:22.238745 sshd-session[5567]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:22.247142 systemd[1]: sshd@12-172.31.24.204:22-68.220.241.50:41182.service: Deactivated successfully. Jan 23 18:00:22.253068 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:00:22.259226 systemd-logind[1978]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:00:22.262261 systemd-logind[1978]: Removed session 12. Jan 23 18:00:22.347378 systemd[1]: Started sshd@13-172.31.24.204:22-68.220.241.50:41198.service - OpenSSH per-connection server daemon (68.220.241.50:41198). Jan 23 18:00:22.911386 sshd[5582]: Accepted publickey for core from 68.220.241.50 port 41198 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:22.915112 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:22.924684 systemd-logind[1978]: New session 13 of user core. Jan 23 18:00:22.937805 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:00:23.434860 sshd[5585]: Connection closed by 68.220.241.50 port 41198 Jan 23 18:00:23.435816 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:23.443184 systemd[1]: sshd@13-172.31.24.204:22-68.220.241.50:41198.service: Deactivated successfully. Jan 23 18:00:23.447400 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:00:23.451433 systemd-logind[1978]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:00:23.454627 systemd-logind[1978]: Removed session 13. 
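Interleaved with the pull failures, systemd-logind records each SSH session opening ("New session N of user core.") and closing ("Removed session N."). Pairing the two events gives session durations, which is useful when checking whether these short connections are a health probe or a human. A sketch under the assumption that timestamps follow this journal's "Jan 23 18:00:20.563840" format (the `EVENT` regex and `session_durations` helper are hypothetical names introduced here):

```python
import re
from datetime import datetime

# Match the journal timestamp plus a logind "New session N" /
# "Removed session N" event on the same line.
EVENT = re.compile(r'^(?P<ts>\w+ \d+ [\d:.]+) .*?(?P<kind>New|Removed) session (?P<id>\d+)')

def session_durations(lines, year=2026):
    """Map session id -> duration in seconds, for sessions opened and closed."""
    opened, durations = {}, {}
    for line in lines:
        m = EVENT.search(line)
        if not m:
            continue
        # Journal timestamps omit the year, so it must be supplied.
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S.%f")
        sid = m.group("id")
        if m.group("kind") == "New":
            opened[sid] = ts
        elif sid in opened:
            durations[sid] = (ts - opened.pop(sid)).total_seconds()
    return durations

lines = [
    "Jan 23 18:00:20.563840 systemd-logind[1978]: New session 11 of user core.",
    "Jan 23 18:00:21.067996 systemd-logind[1978]: Removed session 11.",
]
print(session_durations(lines))  # session 11 lasted about half a second
```

Sessions 9 through 14 in this journal all last on the order of a second, consistent with an automated client rather than an interactive login.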
Jan 23 18:00:27.684536 containerd[1996]: time="2026-01-23T18:00:27.684445246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:27.687848 containerd[1996]: time="2026-01-23T18:00:27.687639706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:27.687848 containerd[1996]: time="2026-01-23T18:00:27.687719266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:27.688284 kubelet[3327]: E0123 18:00:27.688141 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:27.690309 kubelet[3327]: E0123 18:00:27.688251 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:27.690309 kubelet[3327]: E0123 18:00:27.688779 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-xdthn_calico-apiserver(01be8348-3893-401c-b7b7-ba407784cdaf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:27.691046 kubelet[3327]: E0123 18:00:27.690548 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:00:27.691728 containerd[1996]: time="2026-01-23T18:00:27.691668022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:00:27.854470 kubelet[3327]: E0123 18:00:27.854359 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:00:28.522202 systemd[1]: Started sshd@14-172.31.24.204:22-68.220.241.50:48770.service - OpenSSH per-connection server daemon (68.220.241.50:48770). 
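At this point the kubelet entries switch from `ErrImagePull` to `ImagePullBackOff` ("Back-off pulling image ..."): after repeated failures, kubelet stops retrying immediately and waits an exponentially growing interval between attempts. A sketch of that schedule, assuming kubelet's commonly documented defaults of a 10-second initial delay doubling up to a 5-minute cap (the real implementation also decays the counter after a quiet period, which this sketch omits):

```python
# Assumed kubelet image-pull backoff defaults: 10s initial delay,
# doubling on each consecutive failure, capped at 300s.
INITIAL, CAP = 10, 300

def backoff_delay(failure_count):
    """Seconds to wait before retrying after N consecutive pull failures."""
    if failure_count <= 0:
        return 0
    return min(INITIAL * 2 ** (failure_count - 1), CAP)

print([backoff_delay(n) for n in range(1, 8)])
# → [10, 20, 40, 80, 160, 300, 300]
```

This explains the spacing visible in the journal: the same image is retried at 18:00:15, 18:00:27, and so on, with the gaps widening as the 404s keep coming back.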
Jan 23 18:00:29.055757 sshd[5628]: Accepted publickey for core from 68.220.241.50 port 48770 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:29.058733 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:29.068021 systemd-logind[1978]: New session 14 of user core. Jan 23 18:00:29.074797 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:00:29.540575 sshd[5631]: Connection closed by 68.220.241.50 port 48770 Jan 23 18:00:29.542786 sshd-session[5628]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:29.550418 systemd[1]: sshd@14-172.31.24.204:22-68.220.241.50:48770.service: Deactivated successfully. Jan 23 18:00:29.554922 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:00:29.559121 systemd-logind[1978]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:00:29.561999 systemd-logind[1978]: Removed session 14. Jan 23 18:00:29.853589 kubelet[3327]: E0123 18:00:29.852976 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:00:30.113379 containerd[1996]: time="2026-01-23T18:00:30.112897630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:30.115813 containerd[1996]: time="2026-01-23T18:00:30.115726714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:00:30.115997 containerd[1996]: time="2026-01-23T18:00:30.115866598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:30.116613 kubelet[3327]: E0123 18:00:30.116229 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:30.116613 kubelet[3327]: E0123 18:00:30.116403 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:30.117492 kubelet[3327]: E0123 18:00:30.117197 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztspz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-82cnj_calico-system(169e9a61-dd6f-4dcb-a857-0adba680dfb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:30.118618 containerd[1996]: time="2026-01-23T18:00:30.117880978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:00:30.119252 kubelet[3327]: E0123 18:00:30.119186 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:00:31.684308 containerd[1996]: time="2026-01-23T18:00:31.684216373Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 23 18:00:31.687784 containerd[1996]: time="2026-01-23T18:00:31.687632221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:00:31.687784 containerd[1996]: time="2026-01-23T18:00:31.687703129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:31.688424 kubelet[3327]: E0123 18:00:31.688282 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:00:31.689320 kubelet[3327]: E0123 18:00:31.688400 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:00:31.689320 kubelet[3327]: E0123 18:00:31.689300 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:31.690556 containerd[1996]: time="2026-01-23T18:00:31.690448225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:00:31.691049 kubelet[3327]: E0123 18:00:31.690601 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:00:31.968367 containerd[1996]: time="2026-01-23T18:00:31.968211951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:31.971108 containerd[1996]: time="2026-01-23T18:00:31.970969155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:00:31.971108 containerd[1996]: time="2026-01-23T18:00:31.971057295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:00:31.971313 kubelet[3327]: E0123 18:00:31.971244 
3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:31.971382 kubelet[3327]: E0123 18:00:31.971304 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:31.972061 kubelet[3327]: E0123 18:00:31.971456 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:
nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:31.976344 containerd[1996]: time="2026-01-23T18:00:31.976259367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:00:32.249337 containerd[1996]: time="2026-01-23T18:00:32.249172572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:32.251490 containerd[1996]: time="2026-01-23T18:00:32.251402484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:00:32.251770 containerd[1996]: time="2026-01-23T18:00:32.251573232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:00:32.251976 
kubelet[3327]: E0123 18:00:32.251882 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:32.252454 kubelet[3327]: E0123 18:00:32.251967 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:32.252454 kubelet[3327]: E0123 18:00:32.252172 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:32.253656 kubelet[3327]: E0123 18:00:32.253553 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:00:34.638388 systemd[1]: Started sshd@15-172.31.24.204:22-68.220.241.50:38670.service - OpenSSH per-connection server daemon (68.220.241.50:38670). Jan 23 18:00:35.160829 sshd[5648]: Accepted publickey for core from 68.220.241.50 port 38670 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:35.163040 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:35.172623 systemd-logind[1978]: New session 15 of user core. Jan 23 18:00:35.179799 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:00:35.673546 sshd[5651]: Connection closed by 68.220.241.50 port 38670 Jan 23 18:00:35.674394 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:35.681378 systemd[1]: sshd@15-172.31.24.204:22-68.220.241.50:38670.service: Deactivated successfully. Jan 23 18:00:35.687013 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 23 18:00:35.689446 systemd-logind[1978]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:00:35.693079 systemd-logind[1978]: Removed session 15. Jan 23 18:00:39.856132 containerd[1996]: time="2026-01-23T18:00:39.856052446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:40.734083 containerd[1996]: time="2026-01-23T18:00:40.734016226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:40.737397 containerd[1996]: time="2026-01-23T18:00:40.737267218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:40.737723 containerd[1996]: time="2026-01-23T18:00:40.737276422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:40.737803 kubelet[3327]: E0123 18:00:40.737756 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:40.738350 kubelet[3327]: E0123 18:00:40.737817 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:40.738350 kubelet[3327]: E0123 18:00:40.737985 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z46r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-nzr4p_calico-apiserver(f787ec8c-40de-479c-b75d-f3d24f6583cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:40.739887 kubelet[3327]: E0123 18:00:40.739813 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:00:40.762745 systemd[1]: Started sshd@16-172.31.24.204:22-68.220.241.50:38684.service - OpenSSH per-connection server daemon (68.220.241.50:38684). 
Jan 23 18:00:41.306328 sshd[5674]: Accepted publickey for core from 68.220.241.50 port 38684 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:41.309340 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:41.318708 systemd-logind[1978]: New session 16 of user core. Jan 23 18:00:41.324797 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:00:41.818376 sshd[5679]: Connection closed by 68.220.241.50 port 38684 Jan 23 18:00:41.821760 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:41.831259 systemd[1]: sshd@16-172.31.24.204:22-68.220.241.50:38684.service: Deactivated successfully. Jan 23 18:00:41.835753 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:00:41.838636 systemd-logind[1978]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:00:41.842128 systemd-logind[1978]: Removed session 16. Jan 23 18:00:41.856075 kubelet[3327]: E0123 18:00:41.855986 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:00:41.858692 containerd[1996]: time="2026-01-23T18:00:41.857395284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:00:42.189979 containerd[1996]: time="2026-01-23T18:00:42.189824494Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:42.193437 containerd[1996]: time="2026-01-23T18:00:42.193263502Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:00:42.193437 containerd[1996]: time="2026-01-23T18:00:42.193340914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:42.193826 kubelet[3327]: E0123 18:00:42.193612 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:42.193826 kubelet[3327]: E0123 18:00:42.193683 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:00:42.193972 kubelet[3327]: E0123 18:00:42.193882 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgtnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f8ffc4dc-6h2ph_calico-system(23974028-c047-4f8c-92ef-f4b897791230): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:42.195716 kubelet[3327]: E0123 18:00:42.195492 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:00:43.853808 containerd[1996]: time="2026-01-23T18:00:43.853543778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:00:44.145573 containerd[1996]: 
time="2026-01-23T18:00:44.145375559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:44.148211 containerd[1996]: time="2026-01-23T18:00:44.148095575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:00:44.148563 containerd[1996]: time="2026-01-23T18:00:44.148347959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:00:44.148782 kubelet[3327]: E0123 18:00:44.148702 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:00:44.148782 kubelet[3327]: E0123 18:00:44.148769 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:00:44.149718 kubelet[3327]: E0123 18:00:44.148911 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:da5c2c6cce0043a9a7f6a53d26e0e21c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:44.151734 kubelet[3327]: E0123 18:00:44.151641 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:00:45.854845 kubelet[3327]: E0123 18:00:45.854728 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:00:45.859282 kubelet[3327]: E0123 18:00:45.859039 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:00:46.912387 systemd[1]: Started sshd@17-172.31.24.204:22-68.220.241.50:58406.service - OpenSSH per-connection server daemon (68.220.241.50:58406). Jan 23 18:00:47.446701 sshd[5692]: Accepted publickey for core from 68.220.241.50 port 58406 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:47.450414 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:47.459100 systemd-logind[1978]: New session 17 of user core. Jan 23 18:00:47.468968 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:00:47.958409 sshd[5695]: Connection closed by 68.220.241.50 port 58406 Jan 23 18:00:47.962049 sshd-session[5692]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:47.970595 systemd[1]: sshd@17-172.31.24.204:22-68.220.241.50:58406.service: Deactivated successfully. Jan 23 18:00:47.977357 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:00:47.980668 systemd-logind[1978]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:00:47.984286 systemd-logind[1978]: Removed session 17. Jan 23 18:00:53.056063 systemd[1]: Started sshd@18-172.31.24.204:22-68.220.241.50:43724.service - OpenSSH per-connection server daemon (68.220.241.50:43724). 
Jan 23 18:00:53.595907 sshd[5707]: Accepted publickey for core from 68.220.241.50 port 43724 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:53.598934 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:53.608718 systemd-logind[1978]: New session 18 of user core. Jan 23 18:00:53.616770 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:00:53.857983 kubelet[3327]: E0123 18:00:53.856946 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:00:53.862921 containerd[1996]: time="2026-01-23T18:00:53.862541220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:00:54.136622 sshd[5710]: Connection closed by 68.220.241.50 port 43724 Jan 23 18:00:54.136409 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Jan 23 18:00:54.144250 systemd[1]: sshd@18-172.31.24.204:22-68.220.241.50:43724.service: Deactivated successfully. Jan 23 18:00:54.150209 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:00:54.155473 systemd-logind[1978]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:00:54.162022 systemd-logind[1978]: Removed session 18. 
Jan 23 18:00:54.166318 containerd[1996]: time="2026-01-23T18:00:54.166128129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:54.168519 containerd[1996]: time="2026-01-23T18:00:54.168376461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:00:54.168519 containerd[1996]: time="2026-01-23T18:00:54.168487461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:54.169026 kubelet[3327]: E0123 18:00:54.168964 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:54.169223 kubelet[3327]: E0123 18:00:54.169176 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:00:54.169666 kubelet[3327]: E0123 18:00:54.169558 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-xdthn_calico-apiserver(01be8348-3893-401c-b7b7-ba407784cdaf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:54.171458 kubelet[3327]: E0123 18:00:54.171364 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:00:55.852376 kubelet[3327]: E0123 18:00:55.852301 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:00:56.851826 containerd[1996]: time="2026-01-23T18:00:56.851763002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:00:57.143038 containerd[1996]: time="2026-01-23T18:00:57.142870128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:57.145269 containerd[1996]: time="2026-01-23T18:00:57.145198752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:00:57.145369 containerd[1996]: time="2026-01-23T18:00:57.145335312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:00:57.145608 kubelet[3327]: E0123 18:00:57.145496 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:57.146106 kubelet[3327]: E0123 18:00:57.145644 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:00:57.146209 kubelet[3327]: E0123 18:00:57.145956 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:57.148999 containerd[1996]: time="2026-01-23T18:00:57.148848360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:00:57.452025 containerd[1996]: time="2026-01-23T18:00:57.451872913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:57.454151 containerd[1996]: time="2026-01-23T18:00:57.454074301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:00:57.454311 containerd[1996]: time="2026-01-23T18:00:57.454195057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:00:57.454596 kubelet[3327]: E0123 18:00:57.454547 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:57.455179 kubelet[3327]: E0123 18:00:57.454721 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:00:57.455179 kubelet[3327]: E0123 
18:00:57.454885 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:57.456485 kubelet[3327]: E0123 18:00:57.456375 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:00:57.860225 containerd[1996]: time="2026-01-23T18:00:57.859886427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:00:58.164902 containerd[1996]: time="2026-01-23T18:00:58.164727193Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:00:58.167156 containerd[1996]: time="2026-01-23T18:00:58.167083225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:00:58.167680 
containerd[1996]: time="2026-01-23T18:00:58.167125273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:00:58.167740 kubelet[3327]: E0123 18:00:58.167372 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:58.167740 kubelet[3327]: E0123 18:00:58.167446 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:00:58.169029 kubelet[3327]: E0123 18:00:58.168314 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztspz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-82cnj_calico-system(169e9a61-dd6f-4dcb-a857-0adba680dfb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:58.170173 kubelet[3327]: E0123 18:00:58.169647 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:00:58.853490 containerd[1996]: time="2026-01-23T18:00:58.853407568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:00:59.135658 containerd[1996]: time="2026-01-23T18:00:59.135471698Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 23 18:00:59.137778 containerd[1996]: time="2026-01-23T18:00:59.137698358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:00:59.137900 containerd[1996]: time="2026-01-23T18:00:59.137827154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:00:59.138555 kubelet[3327]: E0123 18:00:59.138164 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:00:59.138555 kubelet[3327]: E0123 18:00:59.138236 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:00:59.138555 kubelet[3327]: E0123 18:00:59.138395 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:00:59.140112 kubelet[3327]: E0123 18:00:59.140032 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:00:59.250867 systemd[1]: Started sshd@19-172.31.24.204:22-68.220.241.50:43736.service - OpenSSH per-connection server daemon (68.220.241.50:43736). Jan 23 18:00:59.811698 sshd[5747]: Accepted publickey for core from 68.220.241.50 port 43736 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:00:59.815088 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:00:59.825623 systemd-logind[1978]: New session 19 of user core. Jan 23 18:00:59.834855 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 18:01:00.380585 sshd[5750]: Connection closed by 68.220.241.50 port 43736 Jan 23 18:01:00.380432 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:00.393495 systemd[1]: sshd@19-172.31.24.204:22-68.220.241.50:43736.service: Deactivated successfully. Jan 23 18:01:00.400102 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 18:01:00.405410 systemd-logind[1978]: Session 19 logged out. Waiting for processes to exit. Jan 23 18:01:00.409182 systemd-logind[1978]: Removed session 19. Jan 23 18:01:00.482169 systemd[1]: Started sshd@20-172.31.24.204:22-68.220.241.50:43748.service - OpenSSH per-connection server daemon (68.220.241.50:43748). Jan 23 18:01:01.052764 sshd[5762]: Accepted publickey for core from 68.220.241.50 port 43748 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:01.055091 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:01.063637 systemd-logind[1978]: New session 20 of user core. Jan 23 18:01:01.069789 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 18:01:01.770729 sshd[5766]: Connection closed by 68.220.241.50 port 43748 Jan 23 18:01:01.772063 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:01.779205 systemd[1]: sshd@20-172.31.24.204:22-68.220.241.50:43748.service: Deactivated successfully. Jan 23 18:01:01.785831 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 18:01:01.790656 systemd-logind[1978]: Session 20 logged out. Waiting for processes to exit. Jan 23 18:01:01.793283 systemd-logind[1978]: Removed session 20. Jan 23 18:01:01.870531 systemd[1]: Started sshd@21-172.31.24.204:22-68.220.241.50:43760.service - OpenSSH per-connection server daemon (68.220.241.50:43760). 
Jan 23 18:01:02.446091 sshd[5776]: Accepted publickey for core from 68.220.241.50 port 43760 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:02.449338 sshd-session[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:02.461648 systemd-logind[1978]: New session 21 of user core. Jan 23 18:01:02.464768 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 18:01:03.826763 sshd[5779]: Connection closed by 68.220.241.50 port 43760 Jan 23 18:01:03.827268 sshd-session[5776]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:03.838893 systemd[1]: sshd@21-172.31.24.204:22-68.220.241.50:43760.service: Deactivated successfully. Jan 23 18:01:03.849011 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 18:01:03.856544 systemd-logind[1978]: Session 21 logged out. Waiting for processes to exit. Jan 23 18:01:03.861733 systemd-logind[1978]: Removed session 21. Jan 23 18:01:03.919962 systemd[1]: Started sshd@22-172.31.24.204:22-68.220.241.50:40298.service - OpenSSH per-connection server daemon (68.220.241.50:40298). Jan 23 18:01:04.454563 sshd[5798]: Accepted publickey for core from 68.220.241.50 port 40298 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:04.457173 sshd-session[5798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:04.466991 systemd-logind[1978]: New session 22 of user core. Jan 23 18:01:04.477784 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 18:01:05.220417 sshd[5801]: Connection closed by 68.220.241.50 port 40298 Jan 23 18:01:05.220938 sshd-session[5798]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:05.229036 systemd[1]: sshd@22-172.31.24.204:22-68.220.241.50:40298.service: Deactivated successfully. Jan 23 18:01:05.235232 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 23 18:01:05.240197 systemd-logind[1978]: Session 22 logged out. Waiting for processes to exit. Jan 23 18:01:05.243082 systemd-logind[1978]: Removed session 22. Jan 23 18:01:05.317072 systemd[1]: Started sshd@23-172.31.24.204:22-68.220.241.50:40308.service - OpenSSH per-connection server daemon (68.220.241.50:40308). Jan 23 18:01:05.843270 sshd[5811]: Accepted publickey for core from 68.220.241.50 port 40308 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:05.845953 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:05.860442 systemd-logind[1978]: New session 23 of user core. Jan 23 18:01:05.868910 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 18:01:05.874545 kubelet[3327]: E0123 18:01:05.870274 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:01:06.309957 sshd[5814]: Connection closed by 68.220.241.50 port 40308 Jan 23 18:01:06.310863 sshd-session[5811]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:06.318227 systemd[1]: sshd@23-172.31.24.204:22-68.220.241.50:40308.service: Deactivated successfully. Jan 23 18:01:06.323772 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 18:01:06.327430 systemd-logind[1978]: Session 23 logged out. Waiting for processes to exit. Jan 23 18:01:06.331102 systemd-logind[1978]: Removed session 23. 
Jan 23 18:01:07.857964 kubelet[3327]: E0123 18:01:07.857911 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:01:07.860338 kubelet[3327]: E0123 18:01:07.860059 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:01:11.411855 systemd[1]: Started sshd@24-172.31.24.204:22-68.220.241.50:40324.service - OpenSSH per-connection server daemon (68.220.241.50:40324). 
Jan 23 18:01:11.864239 kubelet[3327]: E0123 18:01:11.862772 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:01:11.864239 kubelet[3327]: E0123 18:01:11.862925 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:01:11.951533 sshd[5829]: Accepted publickey for core from 68.220.241.50 port 40324 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:11.953205 sshd-session[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:11.965672 
systemd-logind[1978]: New session 24 of user core. Jan 23 18:01:11.972813 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 18:01:12.445588 sshd[5832]: Connection closed by 68.220.241.50 port 40324 Jan 23 18:01:12.447075 sshd-session[5829]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:12.455189 systemd[1]: sshd@24-172.31.24.204:22-68.220.241.50:40324.service: Deactivated successfully. Jan 23 18:01:12.465396 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 18:01:12.468989 systemd-logind[1978]: Session 24 logged out. Waiting for processes to exit. Jan 23 18:01:12.477553 systemd-logind[1978]: Removed session 24. Jan 23 18:01:12.857601 kubelet[3327]: E0123 18:01:12.857355 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:01:17.555016 systemd[1]: Started sshd@25-172.31.24.204:22-68.220.241.50:49288.service - OpenSSH per-connection server daemon (68.220.241.50:49288). 
Jan 23 18:01:18.128397 sshd[5846]: Accepted publickey for core from 68.220.241.50 port 49288 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:18.131442 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:18.140795 systemd-logind[1978]: New session 25 of user core. Jan 23 18:01:18.152468 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 18:01:18.678464 sshd[5849]: Connection closed by 68.220.241.50 port 49288 Jan 23 18:01:18.677405 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:18.686722 systemd[1]: sshd@25-172.31.24.204:22-68.220.241.50:49288.service: Deactivated successfully. Jan 23 18:01:18.694858 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 18:01:18.702092 systemd-logind[1978]: Session 25 logged out. Waiting for processes to exit. Jan 23 18:01:18.706229 systemd-logind[1978]: Removed session 25. Jan 23 18:01:20.851391 kubelet[3327]: E0123 18:01:20.851264 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:01:22.853823 kubelet[3327]: E0123 18:01:22.853758 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:01:22.856481 containerd[1996]: time="2026-01-23T18:01:22.855849952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:01:23.122270 containerd[1996]: time="2026-01-23T18:01:23.121989241Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:23.124360 containerd[1996]: time="2026-01-23T18:01:23.124231153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:01:23.126622 containerd[1996]: time="2026-01-23T18:01:23.124253065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:01:23.126741 kubelet[3327]: E0123 18:01:23.124665 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:23.126741 kubelet[3327]: E0123 18:01:23.124729 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:23.126741 kubelet[3327]: E0123 18:01:23.124905 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z46r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-nzr4p_calico-apiserver(f787ec8c-40de-479c-b75d-f3d24f6583cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:23.127325 kubelet[3327]: E0123 18:01:23.127247 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:01:23.777927 systemd[1]: Started sshd@26-172.31.24.204:22-68.220.241.50:47374.service - OpenSSH per-connection server daemon (68.220.241.50:47374). 
Jan 23 18:01:23.861557 kubelet[3327]: E0123 18:01:23.859265 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:01:24.365569 sshd[5866]: Accepted publickey for core from 68.220.241.50 port 47374 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:24.369663 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:24.382616 systemd-logind[1978]: New session 26 of user core. Jan 23 18:01:24.389858 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 18:01:24.854204 containerd[1996]: time="2026-01-23T18:01:24.854124270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:01:24.936601 sshd[5869]: Connection closed by 68.220.241.50 port 47374 Jan 23 18:01:24.937544 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:24.948330 systemd[1]: sshd@26-172.31.24.204:22-68.220.241.50:47374.service: Deactivated successfully. 
Jan 23 18:01:24.955747 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 18:01:24.959153 systemd-logind[1978]: Session 26 logged out. Waiting for processes to exit. Jan 23 18:01:24.966872 systemd-logind[1978]: Removed session 26. Jan 23 18:01:25.145838 containerd[1996]: time="2026-01-23T18:01:25.145682355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:25.148147 containerd[1996]: time="2026-01-23T18:01:25.148035555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:01:25.148340 containerd[1996]: time="2026-01-23T18:01:25.148183947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:01:25.148641 kubelet[3327]: E0123 18:01:25.148576 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:01:25.150982 kubelet[3327]: E0123 18:01:25.148650 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:01:25.150982 kubelet[3327]: E0123 18:01:25.148806 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:da5c2c6cce0043a9a7f6a53d26e0e21c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:25.152340 kubelet[3327]: E0123 18:01:25.152201 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:01:26.853234 kubelet[3327]: E0123 18:01:26.853171 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:01:30.037941 systemd[1]: Started sshd@27-172.31.24.204:22-68.220.241.50:47382.service - OpenSSH per-connection server daemon (68.220.241.50:47382). Jan 23 18:01:30.616527 sshd[5911]: Accepted publickey for core from 68.220.241.50 port 47382 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:30.620581 sshd-session[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:30.633477 systemd-logind[1978]: New session 27 of user core. Jan 23 18:01:30.640910 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 23 18:01:31.186697 sshd[5914]: Connection closed by 68.220.241.50 port 47382 Jan 23 18:01:31.186577 sshd-session[5911]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:31.194945 systemd-logind[1978]: Session 27 logged out. Waiting for processes to exit. Jan 23 18:01:31.197004 systemd[1]: sshd@27-172.31.24.204:22-68.220.241.50:47382.service: Deactivated successfully. Jan 23 18:01:31.205817 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 18:01:31.214218 systemd-logind[1978]: Removed session 27. Jan 23 18:01:34.854059 containerd[1996]: time="2026-01-23T18:01:34.853991367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:01:35.153459 containerd[1996]: time="2026-01-23T18:01:35.152828305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:35.155833 containerd[1996]: time="2026-01-23T18:01:35.155740357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:01:35.157788 containerd[1996]: time="2026-01-23T18:01:35.155786005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:01:35.157960 kubelet[3327]: E0123 18:01:35.156228 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:35.157960 kubelet[3327]: E0123 18:01:35.156287 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:01:35.157960 kubelet[3327]: E0123 18:01:35.156452 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8464998c88-xdthn_calico-apiserver(01be8348-3893-401c-b7b7-ba407784cdaf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:35.158761 kubelet[3327]: E0123 18:01:35.158568 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:01:35.855533 containerd[1996]: time="2026-01-23T18:01:35.853479220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:01:36.113683 containerd[1996]: 
time="2026-01-23T18:01:36.113340817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:36.116074 containerd[1996]: time="2026-01-23T18:01:36.115837861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:01:36.116074 containerd[1996]: time="2026-01-23T18:01:36.115878097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:01:36.117121 kubelet[3327]: E0123 18:01:36.116435 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:01:36.117121 kubelet[3327]: E0123 18:01:36.116525 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:01:36.117121 kubelet[3327]: E0123 18:01:36.116704 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgtnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f8ffc4dc-6h2ph_calico-system(23974028-c047-4f8c-92ef-f4b897791230): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:36.118890 kubelet[3327]: E0123 18:01:36.118810 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:01:36.274192 systemd[1]: Started sshd@28-172.31.24.204:22-68.220.241.50:39048.service - OpenSSH per-connection server daemon (68.220.241.50:39048). 
Jan 23 18:01:36.800935 sshd[5945]: Accepted publickey for core from 68.220.241.50 port 39048 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:36.809855 sshd-session[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:36.821141 systemd-logind[1978]: New session 28 of user core. Jan 23 18:01:36.831811 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 23 18:01:37.324558 sshd[5948]: Connection closed by 68.220.241.50 port 39048 Jan 23 18:01:37.325391 sshd-session[5945]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:37.334923 systemd-logind[1978]: Session 28 logged out. Waiting for processes to exit. Jan 23 18:01:37.336195 systemd[1]: sshd@28-172.31.24.204:22-68.220.241.50:39048.service: Deactivated successfully. Jan 23 18:01:37.341408 systemd[1]: session-28.scope: Deactivated successfully. Jan 23 18:01:37.346871 systemd-logind[1978]: Removed session 28. Jan 23 18:01:38.853236 kubelet[3327]: E0123 18:01:38.852254 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:01:38.855863 containerd[1996]: time="2026-01-23T18:01:38.855791407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:01:39.130980 containerd[1996]: time="2026-01-23T18:01:39.130240540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:39.132678 containerd[1996]: time="2026-01-23T18:01:39.132587824Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:01:39.132913 containerd[1996]: time="2026-01-23T18:01:39.132647752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:01:39.133058 kubelet[3327]: E0123 18:01:39.132987 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:01:39.133143 kubelet[3327]: E0123 18:01:39.133067 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:01:39.133299 kubelet[3327]: E0123 18:01:39.133220 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:39.138529 containerd[1996]: time="2026-01-23T18:01:39.137870656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:01:39.430700 containerd[1996]: time="2026-01-23T18:01:39.430476666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:39.433374 containerd[1996]: time="2026-01-23T18:01:39.433143330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:01:39.433374 containerd[1996]: time="2026-01-23T18:01:39.433205598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:01:39.433613 kubelet[3327]: E0123 18:01:39.433449 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:01:39.433613 kubelet[3327]: E0123 18:01:39.433529 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:01:39.433750 kubelet[3327]: E0123 
18:01:39.433682 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mhbhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wzd8d_calico-system(bd861cd6-0ac7-4fc8-b917-14516a6e2c66): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:39.435307 kubelet[3327]: E0123 18:01:39.435207 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:01:39.856644 containerd[1996]: time="2026-01-23T18:01:39.856464944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:01:40.145879 containerd[1996]: time="2026-01-23T18:01:40.145712345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:40.147991 containerd[1996]: time="2026-01-23T18:01:40.147907985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" Jan 23 18:01:40.148114 containerd[1996]: time="2026-01-23T18:01:40.148046777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:01:40.148319 kubelet[3327]: E0123 18:01:40.148258 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:01:40.148845 kubelet[3327]: E0123 18:01:40.148328 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:01:40.148845 kubelet[3327]: E0123 18:01:40.148478 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tkc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-68f8bc678b-vc2z8_calico-system(1e389e39-d560-4d57-90e1-c702cef458f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:40.150383 kubelet[3327]: E0123 18:01:40.150295 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:01:40.852085 containerd[1996]: time="2026-01-23T18:01:40.851984721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:01:41.303490 containerd[1996]: time="2026-01-23T18:01:41.303419755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:01:41.305838 containerd[1996]: time="2026-01-23T18:01:41.305705179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:01:41.305838 containerd[1996]: time="2026-01-23T18:01:41.305726479Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:01:41.306152 kubelet[3327]: E0123 18:01:41.306034 3327 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:01:41.307078 kubelet[3327]: E0123 18:01:41.306129 3327 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:01:41.307078 kubelet[3327]: E0123 18:01:41.306767 3327 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztspz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-82cnj_calico-system(169e9a61-dd6f-4dcb-a857-0adba680dfb0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:01:41.308107 kubelet[3327]: E0123 18:01:41.308035 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:01:42.417849 systemd[1]: Started sshd@29-172.31.24.204:22-68.220.241.50:39050.service - OpenSSH per-connection server daemon (68.220.241.50:39050). 
Jan 23 18:01:42.951076 sshd[5961]: Accepted publickey for core from 68.220.241.50 port 39050 ssh2: RSA SHA256:bT2W1VfOscVmSCRasYr+KxB4wnT28qHFQXmybiJGx88 Jan 23 18:01:42.956114 sshd-session[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:01:42.968076 systemd-logind[1978]: New session 29 of user core. Jan 23 18:01:42.977820 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 23 18:01:43.459674 sshd[5964]: Connection closed by 68.220.241.50 port 39050 Jan 23 18:01:43.460186 sshd-session[5961]: pam_unix(sshd:session): session closed for user core Jan 23 18:01:43.469355 systemd-logind[1978]: Session 29 logged out. Waiting for processes to exit. Jan 23 18:01:43.470022 systemd[1]: sshd@29-172.31.24.204:22-68.220.241.50:39050.service: Deactivated successfully. Jan 23 18:01:43.475881 systemd[1]: session-29.scope: Deactivated successfully. Jan 23 18:01:43.484161 systemd-logind[1978]: Removed session 29. Jan 23 18:01:46.852101 kubelet[3327]: E0123 18:01:46.852046 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:01:50.851472 kubelet[3327]: E0123 18:01:50.851321 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:01:50.852738 kubelet[3327]: E0123 18:01:50.852669 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:01:52.852330 kubelet[3327]: E0123 18:01:52.852002 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" 
podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:01:52.853796 kubelet[3327]: E0123 18:01:52.853594 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:01:55.851808 kubelet[3327]: E0123 18:01:55.851697 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:01:57.356693 kubelet[3327]: E0123 18:01:57.356461 3327 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Jan 23 18:01:57.851154 kubelet[3327]: E0123 18:01:57.851022 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:01:58.201329 systemd[1]: cri-containerd-85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8.scope: Deactivated successfully. Jan 23 18:01:58.204728 systemd[1]: cri-containerd-85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8.scope: Consumed 30.182s CPU time, 106.2M memory peak. Jan 23 18:01:58.209648 containerd[1996]: time="2026-01-23T18:01:58.209489219Z" level=info msg="received container exit event container_id:\"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\" id:\"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\" pid:3830 exit_status:1 exited_at:{seconds:1769191318 nanos:208405727}" Jan 23 18:01:58.259807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8-rootfs.mount: Deactivated successfully. 
Jan 23 18:01:58.690786 kubelet[3327]: I0123 18:01:58.690740 3327 scope.go:117] "RemoveContainer" containerID="85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8" Jan 23 18:01:58.695339 containerd[1996]: time="2026-01-23T18:01:58.694324766Z" level=info msg="CreateContainer within sandbox \"4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 18:01:58.713565 containerd[1996]: time="2026-01-23T18:01:58.712354238Z" level=info msg="Container acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:01:58.726955 containerd[1996]: time="2026-01-23T18:01:58.726906350Z" level=info msg="CreateContainer within sandbox \"4a4b34df9c3066dcbf5bd50fed758c0c0ae970246e981c38a3d290c8470d1827\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630\"" Jan 23 18:01:58.728317 containerd[1996]: time="2026-01-23T18:01:58.728277422Z" level=info msg="StartContainer for \"acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630\"" Jan 23 18:01:58.730223 containerd[1996]: time="2026-01-23T18:01:58.730166438Z" level=info msg="connecting to shim acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630" address="unix:///run/containerd/s/01f8bb1d109acca590c471b1e2af41551e84b764814921a207dd1b8e2bed6865" protocol=ttrpc version=3 Jan 23 18:01:58.768815 systemd[1]: Started cri-containerd-acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630.scope - libcontainer container acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630. 
Jan 23 18:01:58.829707 containerd[1996]: time="2026-01-23T18:01:58.829618130Z" level=info msg="StartContainer for \"acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630\" returns successfully" Jan 23 18:01:58.879384 systemd[1]: cri-containerd-3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf.scope: Deactivated successfully. Jan 23 18:01:58.880255 systemd[1]: cri-containerd-3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf.scope: Consumed 6.566s CPU time, 58.8M memory peak. Jan 23 18:01:58.884844 containerd[1996]: time="2026-01-23T18:01:58.884789283Z" level=info msg="received container exit event container_id:\"3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf\" id:\"3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf\" pid:3151 exit_status:1 exited_at:{seconds:1769191318 nanos:884098167}" Jan 23 18:01:58.939689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf-rootfs.mount: Deactivated successfully. 
Jan 23 18:01:59.701800 kubelet[3327]: I0123 18:01:59.701744 3327 scope.go:117] "RemoveContainer" containerID="3ee1a2f56863f70e0bc189a877602249e3367c663216c59578187fc2fe7540bf" Jan 23 18:01:59.705912 containerd[1996]: time="2026-01-23T18:01:59.705864819Z" level=info msg="CreateContainer within sandbox \"6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 18:01:59.728537 containerd[1996]: time="2026-01-23T18:01:59.727141179Z" level=info msg="Container 77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:01:59.747852 containerd[1996]: time="2026-01-23T18:01:59.747741531Z" level=info msg="CreateContainer within sandbox \"6b6fe58d8ad26a22d2e1184ab7b13028d2dc6943a86d977d77bcc76ce3a6e6d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da\"" Jan 23 18:01:59.748871 containerd[1996]: time="2026-01-23T18:01:59.748808391Z" level=info msg="StartContainer for \"77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da\"" Jan 23 18:01:59.752123 containerd[1996]: time="2026-01-23T18:01:59.752055771Z" level=info msg="connecting to shim 77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da" address="unix:///run/containerd/s/5c26172bcec5046831c619207faee62226ac0b21ac35f4f99a3d6e9e00d567dc" protocol=ttrpc version=3 Jan 23 18:01:59.788827 systemd[1]: Started cri-containerd-77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da.scope - libcontainer container 77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da. 
Jan 23 18:01:59.876997 containerd[1996]: time="2026-01-23T18:01:59.876928527Z" level=info msg="StartContainer for \"77a6c165d13626aab96313d48ccd3c7669f6c2de33532cbdba899a7cc785a5da\" returns successfully" Jan 23 18:02:01.853266 kubelet[3327]: E0123 18:02:01.851795 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:02:03.679293 systemd[1]: cri-containerd-a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387.scope: Deactivated successfully. Jan 23 18:02:03.680681 systemd[1]: cri-containerd-a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387.scope: Consumed 7.386s CPU time, 20.9M memory peak. Jan 23 18:02:03.688087 containerd[1996]: time="2026-01-23T18:02:03.687972690Z" level=info msg="received container exit event container_id:\"a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387\" id:\"a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387\" pid:3167 exit_status:1 exited_at:{seconds:1769191323 nanos:687469818}" Jan 23 18:02:03.739580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387-rootfs.mount: Deactivated successfully. 
Jan 23 18:02:03.853581 kubelet[3327]: E0123 18:02:03.853464 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:02:03.856562 kubelet[3327]: E0123 18:02:03.856304 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:02:04.727528 kubelet[3327]: I0123 18:02:04.727397 3327 scope.go:117] "RemoveContainer" containerID="a09ac3347d2a6ecf5d11a336debd6d2805b08e9e8850544adc41c1969cd1d387" Jan 23 18:02:04.731144 containerd[1996]: time="2026-01-23T18:02:04.731072732Z" level=info 
msg="CreateContainer within sandbox \"11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 18:02:04.749540 containerd[1996]: time="2026-01-23T18:02:04.748878620Z" level=info msg="Container f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:02:04.769653 containerd[1996]: time="2026-01-23T18:02:04.769578956Z" level=info msg="CreateContainer within sandbox \"11d1b0d9286bbfb58550ada5f42454199ec4813640e151edfee83ad27ef9eff1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7\"" Jan 23 18:02:04.770368 containerd[1996]: time="2026-01-23T18:02:04.770304956Z" level=info msg="StartContainer for \"f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7\"" Jan 23 18:02:04.773156 containerd[1996]: time="2026-01-23T18:02:04.773087276Z" level=info msg="connecting to shim f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7" address="unix:///run/containerd/s/4fc8089bc03ca8188834c103e21cf41881060119824daf6d74831f42bacb9e89" protocol=ttrpc version=3 Jan 23 18:02:04.814820 systemd[1]: Started cri-containerd-f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7.scope - libcontainer container f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7. 
Jan 23 18:02:04.853363 kubelet[3327]: E0123 18:02:04.853256 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66" Jan 23 18:02:04.902814 containerd[1996]: time="2026-01-23T18:02:04.902768696Z" level=info msg="StartContainer for \"f44f190fc88a7315e1ea5fbaa49a6e84b34c027ebe3c4a0bc931d57c6087daf7\" returns successfully" Jan 23 18:02:07.357175 kubelet[3327]: E0123 18:02:07.357104 3327 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:02:10.364289 systemd[1]: cri-containerd-acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630.scope: Deactivated successfully. 
Jan 23 18:02:10.364842 containerd[1996]: time="2026-01-23T18:02:10.364422696Z" level=info msg="received container exit event container_id:\"acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630\" id:\"acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630\" pid:6025 exit_status:1 exited_at:{seconds:1769191330 nanos:363825492}" Jan 23 18:02:10.404954 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630-rootfs.mount: Deactivated successfully. Jan 23 18:02:10.755101 kubelet[3327]: I0123 18:02:10.754579 3327 scope.go:117] "RemoveContainer" containerID="85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8" Jan 23 18:02:10.756085 kubelet[3327]: I0123 18:02:10.755839 3327 scope.go:117] "RemoveContainer" containerID="acd72987f53e0c8a23ac8c36edcf65f0d6dac38c05751f6abf54a77c706a4630" Jan 23 18:02:10.756710 kubelet[3327]: E0123 18:02:10.756664 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-gv5kz_tigera-operator(ce7b547a-93ef-4717-9473-29c7037baa32)\"" pod="tigera-operator/tigera-operator-7dcd859c48-gv5kz" podUID="ce7b547a-93ef-4717-9473-29c7037baa32" Jan 23 18:02:10.760210 containerd[1996]: time="2026-01-23T18:02:10.759911354Z" level=info msg="RemoveContainer for \"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\"" Jan 23 18:02:10.769851 containerd[1996]: time="2026-01-23T18:02:10.769748678Z" level=info msg="RemoveContainer for \"85d1ec97f1e3fafa1213a856e95b2e18bd849750ffd60749146a9993a2e79fa8\" returns successfully" Jan 23 18:02:10.852144 kubelet[3327]: E0123 18:02:10.852067 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-xdthn" podUID="01be8348-3893-401c-b7b7-ba407784cdaf" Jan 23 18:02:10.852377 kubelet[3327]: E0123 18:02:10.852241 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-82cnj" podUID="169e9a61-dd6f-4dcb-a857-0adba680dfb0" Jan 23 18:02:14.852131 kubelet[3327]: E0123 18:02:14.852006 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-68f8bc678b-vc2z8" podUID="1e389e39-d560-4d57-90e1-c702cef458f5" Jan 23 18:02:14.852863 kubelet[3327]: E0123 18:02:14.852621 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8464998c88-nzr4p" podUID="f787ec8c-40de-479c-b75d-f3d24f6583cc" Jan 23 18:02:16.851440 kubelet[3327]: E0123 18:02:16.851313 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f8ffc4dc-6h2ph" podUID="23974028-c047-4f8c-92ef-f4b897791230" Jan 23 18:02:17.358554 kubelet[3327]: E0123 18:02:17.357760 3327 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-204?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 18:02:19.855680 kubelet[3327]: E0123 18:02:19.855323 3327 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wzd8d" podUID="bd861cd6-0ac7-4fc8-b917-14516a6e2c66"