Jan 15 23:48:08.142997 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 15 23:48:08.143040 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 15 22:06:59 -00 2026
Jan 15 23:48:08.143064 kernel: KASLR disabled due to lack of seed
Jan 15 23:48:08.143080 kernel: efi: EFI v2.7 by EDK II
Jan 15 23:48:08.143096 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Jan 15 23:48:08.143112 kernel: secureboot: Secure boot disabled
Jan 15 23:48:08.143129 kernel: ACPI: Early table checksum verification disabled
Jan 15 23:48:08.143144 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 15 23:48:08.143160 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 15 23:48:08.143175 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 15 23:48:08.143191 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 15 23:48:08.143210 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 15 23:48:08.143277 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 15 23:48:08.143294 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 15 23:48:08.143313 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 15 23:48:08.143330 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 15 23:48:08.143351 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 15 23:48:08.143368 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 15 23:48:08.143384 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 15 23:48:08.143400 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 15 23:48:08.143416 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 15 23:48:08.143432 kernel: printk: legacy bootconsole [uart0] enabled
Jan 15 23:48:08.143448 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 15 23:48:08.143464 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 15 23:48:08.143481 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 15 23:48:08.143497 kernel: Zone ranges:
Jan 15 23:48:08.143513 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 15 23:48:08.143533 kernel:   DMA32    empty
Jan 15 23:48:08.143549 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 15 23:48:08.143564 kernel:   Device   empty
Jan 15 23:48:08.143580 kernel: Movable zone start for each node
Jan 15 23:48:08.143596 kernel: Early memory node ranges
Jan 15 23:48:08.143612 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 15 23:48:08.143628 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 15 23:48:08.143644 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 15 23:48:08.143660 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 15 23:48:08.143675 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 15 23:48:08.143691 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 15 23:48:08.143707 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 15 23:48:08.143727 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 15 23:48:08.143749 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 15 23:48:08.143767 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 15 23:48:08.143784 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 15 23:48:08.143801 kernel: psci: probing for conduit method from ACPI.
Jan 15 23:48:08.143821 kernel: psci: PSCIv1.0 detected in firmware.
Jan 15 23:48:08.143838 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 15 23:48:08.143855 kernel: psci: Trusted OS migration not required
Jan 15 23:48:08.143871 kernel: psci: SMC Calling Convention v1.1
Jan 15 23:48:08.143888 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 15 23:48:08.143905 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 15 23:48:08.143922 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 15 23:48:08.143939 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 15 23:48:08.143956 kernel: Detected PIPT I-cache on CPU0
Jan 15 23:48:08.143973 kernel: CPU features: detected: GIC system register CPU interface
Jan 15 23:48:08.143989 kernel: CPU features: detected: Spectre-v2
Jan 15 23:48:08.144009 kernel: CPU features: detected: Spectre-v3a
Jan 15 23:48:08.144026 kernel: CPU features: detected: Spectre-BHB
Jan 15 23:48:08.144043 kernel: CPU features: detected: ARM erratum 1742098
Jan 15 23:48:08.144078 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 15 23:48:08.144098 kernel: alternatives: applying boot alternatives
Jan 15 23:48:08.144118 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:48:08.144136 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 15 23:48:08.144153 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 15 23:48:08.144170 kernel: Fallback order for Node 0: 0
Jan 15 23:48:08.144187 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 15 23:48:08.144203 kernel: Policy zone: Normal
Jan 15 23:48:08.145375 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 15 23:48:08.145402 kernel: software IO TLB: area num 2.
Jan 15 23:48:08.145421 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Jan 15 23:48:08.145438 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 15 23:48:08.145455 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 15 23:48:08.145473 kernel: rcu: RCU event tracing is enabled.
Jan 15 23:48:08.145491 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 15 23:48:08.145509 kernel: Trampoline variant of Tasks RCU enabled.
Jan 15 23:48:08.145526 kernel: Tracing variant of Tasks RCU enabled.
Jan 15 23:48:08.145543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 15 23:48:08.145560 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 15 23:48:08.145585 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:48:08.145603 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 15 23:48:08.145622 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 15 23:48:08.145638 kernel: GICv3: 96 SPIs implemented
Jan 15 23:48:08.145655 kernel: GICv3: 0 Extended SPIs implemented
Jan 15 23:48:08.145672 kernel: Root IRQ handler: gic_handle_irq
Jan 15 23:48:08.145689 kernel: GICv3: GICv3 features: 16 PPIs
Jan 15 23:48:08.145706 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 15 23:48:08.145723 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 15 23:48:08.145740 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 15 23:48:08.145757 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 15 23:48:08.145776 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 15 23:48:08.145799 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 15 23:48:08.145816 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 15 23:48:08.145833 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 15 23:48:08.145850 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 15 23:48:08.145867 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 15 23:48:08.145886 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 15 23:48:08.145904 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 15 23:48:08.145922 kernel: Console: colour dummy device 80x25
Jan 15 23:48:08.145940 kernel: printk: legacy console [tty1] enabled
Jan 15 23:48:08.145958 kernel: ACPI: Core revision 20240827
Jan 15 23:48:08.145976 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 15 23:48:08.145998 kernel: pid_max: default: 32768 minimum: 301
Jan 15 23:48:08.146016 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 15 23:48:08.146033 kernel: landlock: Up and running.
Jan 15 23:48:08.146050 kernel: SELinux: Initializing.
Jan 15 23:48:08.146068 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:48:08.146086 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 15 23:48:08.146103 kernel: rcu: Hierarchical SRCU implementation.
Jan 15 23:48:08.146121 kernel: rcu: Max phase no-delay instances is 400.
Jan 15 23:48:08.146143 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 15 23:48:08.146160 kernel: Remapping and enabling EFI services.
Jan 15 23:48:08.146178 kernel: smp: Bringing up secondary CPUs ...
Jan 15 23:48:08.146196 kernel: Detected PIPT I-cache on CPU1
Jan 15 23:48:08.146213 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 15 23:48:08.146261 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 15 23:48:08.146279 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 15 23:48:08.146297 kernel: smp: Brought up 1 node, 2 CPUs
Jan 15 23:48:08.146314 kernel: SMP: Total of 2 processors activated.
Jan 15 23:48:08.146338 kernel: CPU: All CPU(s) started at EL1
Jan 15 23:48:08.146366 kernel: CPU features: detected: 32-bit EL0 Support
Jan 15 23:48:08.146384 kernel: CPU features: detected: 32-bit EL1 Support
Jan 15 23:48:08.146405 kernel: CPU features: detected: CRC32 instructions
Jan 15 23:48:08.146423 kernel: alternatives: applying system-wide alternatives
Jan 15 23:48:08.146442 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 15 23:48:08.146461 kernel: devtmpfs: initialized
Jan 15 23:48:08.146480 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 15 23:48:08.146502 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 15 23:48:08.146520 kernel: 16880 pages in range for non-PLT usage
Jan 15 23:48:08.146537 kernel: 508400 pages in range for PLT usage
Jan 15 23:48:08.146555 kernel: pinctrl core: initialized pinctrl subsystem
Jan 15 23:48:08.146573 kernel: SMBIOS 3.0.0 present.
Jan 15 23:48:08.146591 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 15 23:48:08.146609 kernel: DMI: Memory slots populated: 0/0
Jan 15 23:48:08.146627 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 15 23:48:08.146645 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 15 23:48:08.146667 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 15 23:48:08.146685 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 15 23:48:08.146703 kernel: audit: initializing netlink subsys (disabled)
Jan 15 23:48:08.146721 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jan 15 23:48:08.146739 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 15 23:48:08.146758 kernel: cpuidle: using governor menu
Jan 15 23:48:08.146776 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 15 23:48:08.146794 kernel: ASID allocator initialised with 65536 entries
Jan 15 23:48:08.146812 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 15 23:48:08.146833 kernel: Serial: AMBA PL011 UART driver
Jan 15 23:48:08.146852 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 15 23:48:08.146870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 15 23:48:08.146888 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 15 23:48:08.146906 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 15 23:48:08.146924 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 15 23:48:08.146942 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 15 23:48:08.146960 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 15 23:48:08.146978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 15 23:48:08.147000 kernel: ACPI: Added _OSI(Module Device)
Jan 15 23:48:08.147018 kernel: ACPI: Added _OSI(Processor Device)
Jan 15 23:48:08.147036 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 15 23:48:08.147053 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 15 23:48:08.147071 kernel: ACPI: Interpreter enabled
Jan 15 23:48:08.147089 kernel: ACPI: Using GIC for interrupt routing
Jan 15 23:48:08.147107 kernel: ACPI: MCFG table detected, 1 entries
Jan 15 23:48:08.147125 kernel: ACPI: CPU0 has been hot-added
Jan 15 23:48:08.147143 kernel: ACPI: CPU1 has been hot-added
Jan 15 23:48:08.147164 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 15 23:48:08.149558 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 15 23:48:08.149769 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 15 23:48:08.149953 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 15 23:48:08.150134 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 15 23:48:08.151400 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 15 23:48:08.151441 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 15 23:48:08.151469 kernel: acpiphp: Slot [1] registered
Jan 15 23:48:08.151488 kernel: acpiphp: Slot [2] registered
Jan 15 23:48:08.151506 kernel: acpiphp: Slot [3] registered
Jan 15 23:48:08.151524 kernel: acpiphp: Slot [4] registered
Jan 15 23:48:08.151542 kernel: acpiphp: Slot [5] registered
Jan 15 23:48:08.151559 kernel: acpiphp: Slot [6] registered
Jan 15 23:48:08.151577 kernel: acpiphp: Slot [7] registered
Jan 15 23:48:08.151595 kernel: acpiphp: Slot [8] registered
Jan 15 23:48:08.151612 kernel: acpiphp: Slot [9] registered
Jan 15 23:48:08.151630 kernel: acpiphp: Slot [10] registered
Jan 15 23:48:08.151652 kernel: acpiphp: Slot [11] registered
Jan 15 23:48:08.151670 kernel: acpiphp: Slot [12] registered
Jan 15 23:48:08.151688 kernel: acpiphp: Slot [13] registered
Jan 15 23:48:08.151706 kernel: acpiphp: Slot [14] registered
Jan 15 23:48:08.151724 kernel: acpiphp: Slot [15] registered
Jan 15 23:48:08.151741 kernel: acpiphp: Slot [16] registered
Jan 15 23:48:08.151760 kernel: acpiphp: Slot [17] registered
Jan 15 23:48:08.151778 kernel: acpiphp: Slot [18] registered
Jan 15 23:48:08.151795 kernel: acpiphp: Slot [19] registered
Jan 15 23:48:08.151816 kernel: acpiphp: Slot [20] registered
Jan 15 23:48:08.151835 kernel: acpiphp: Slot [21] registered
Jan 15 23:48:08.151853 kernel: acpiphp: Slot [22] registered
Jan 15 23:48:08.151870 kernel: acpiphp: Slot [23] registered
Jan 15 23:48:08.151889 kernel: acpiphp: Slot [24] registered
Jan 15 23:48:08.151906 kernel: acpiphp: Slot [25] registered
Jan 15 23:48:08.151924 kernel: acpiphp: Slot [26] registered
Jan 15 23:48:08.151942 kernel: acpiphp: Slot [27] registered
Jan 15 23:48:08.151960 kernel: acpiphp: Slot [28] registered
Jan 15 23:48:08.151978 kernel: acpiphp: Slot [29] registered
Jan 15 23:48:08.151999 kernel: acpiphp: Slot [30] registered
Jan 15 23:48:08.152017 kernel: acpiphp: Slot [31] registered
Jan 15 23:48:08.152035 kernel: PCI host bridge to bus 0000:00
Jan 15 23:48:08.152271 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 15 23:48:08.152449 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 15 23:48:08.152628 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 15 23:48:08.152801 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 15 23:48:08.153035 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 15 23:48:08.155342 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 15 23:48:08.155598 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 15 23:48:08.155810 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 15 23:48:08.156008 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 15 23:48:08.157599 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 15 23:48:08.157840 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 15 23:48:08.158031 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 15 23:48:08.161778 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 15 23:48:08.162054 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 15 23:48:08.162278 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 15 23:48:08.162459 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 15 23:48:08.162627 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 15 23:48:08.162807 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 15 23:48:08.162832 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 15 23:48:08.162851 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 15 23:48:08.162871 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 15 23:48:08.162889 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 15 23:48:08.162908 kernel: iommu: Default domain type: Translated
Jan 15 23:48:08.162927 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 15 23:48:08.162945 kernel: efivars: Registered efivars operations
Jan 15 23:48:08.162963 kernel: vgaarb: loaded
Jan 15 23:48:08.162986 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 15 23:48:08.163005 kernel: VFS: Disk quotas dquot_6.6.0
Jan 15 23:48:08.163024 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 15 23:48:08.163042 kernel: pnp: PnP ACPI init
Jan 15 23:48:08.163341 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 15 23:48:08.163373 kernel: pnp: PnP ACPI: found 1 devices
Jan 15 23:48:08.163392 kernel: NET: Registered PF_INET protocol family
Jan 15 23:48:08.163411 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 15 23:48:08.163438 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 15 23:48:08.163457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 15 23:48:08.163476 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 15 23:48:08.163494 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 15 23:48:08.163513 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 15 23:48:08.163531 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:48:08.163550 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 15 23:48:08.163567 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 15 23:48:08.163585 kernel: PCI: CLS 0 bytes, default 64
Jan 15 23:48:08.163607 kernel: kvm [1]: HYP mode not available
Jan 15 23:48:08.163626 kernel: Initialise system trusted keyrings
Jan 15 23:48:08.163643 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 15 23:48:08.163662 kernel: Key type asymmetric registered
Jan 15 23:48:08.163680 kernel: Asymmetric key parser 'x509' registered
Jan 15 23:48:08.163698 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 15 23:48:08.163717 kernel: io scheduler mq-deadline registered
Jan 15 23:48:08.163735 kernel: io scheduler kyber registered
Jan 15 23:48:08.163755 kernel: io scheduler bfq registered
Jan 15 23:48:08.163976 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 15 23:48:08.164004 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 15 23:48:08.164022 kernel: ACPI: button: Power Button [PWRB]
Jan 15 23:48:08.164040 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 15 23:48:08.164075 kernel: ACPI: button: Sleep Button [SLPB]
Jan 15 23:48:08.164097 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 15 23:48:08.164116 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 15 23:48:08.164464 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 15 23:48:08.164743 kernel: printk: legacy console [ttyS0] disabled
Jan 15 23:48:08.165005 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 15 23:48:08.165181 kernel: printk: legacy console [ttyS0] enabled
Jan 15 23:48:08.165504 kernel: printk: legacy bootconsole [uart0] disabled
Jan 15 23:48:08.165731 kernel: thunder_xcv, ver 1.0
Jan 15 23:48:08.165987 kernel: thunder_bgx, ver 1.0
Jan 15 23:48:08.166061 kernel: nicpf, ver 1.0
Jan 15 23:48:08.166085 kernel: nicvf, ver 1.0
Jan 15 23:48:08.166322 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 15 23:48:08.166512 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-15T23:48:07 UTC (1768520887)
Jan 15 23:48:08.166536 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 15 23:48:08.166555 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 15 23:48:08.166573 kernel: watchdog: NMI not fully supported
Jan 15 23:48:08.166591 kernel: NET: Registered PF_INET6 protocol family
Jan 15 23:48:08.166609 kernel: watchdog: Hard watchdog permanently disabled
Jan 15 23:48:08.166627 kernel: Segment Routing with IPv6
Jan 15 23:48:08.166645 kernel: In-situ OAM (IOAM) with IPv6
Jan 15 23:48:08.166663 kernel: NET: Registered PF_PACKET protocol family
Jan 15 23:48:08.166686 kernel: Key type dns_resolver registered
Jan 15 23:48:08.166704 kernel: registered taskstats version 1
Jan 15 23:48:08.166722 kernel: Loading compiled-in X.509 certificates
Jan 15 23:48:08.166740 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: b110dfc7e70ecac41e34f52a0c530f0543b60d51'
Jan 15 23:48:08.166758 kernel: Demotion targets for Node 0: null
Jan 15 23:48:08.166776 kernel: Key type .fscrypt registered
Jan 15 23:48:08.166793 kernel: Key type fscrypt-provisioning registered
Jan 15 23:48:08.166811 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 15 23:48:08.166829 kernel: ima: Allocated hash algorithm: sha1
Jan 15 23:48:08.166850 kernel: ima: No architecture policies found
Jan 15 23:48:08.166868 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 15 23:48:08.166886 kernel: clk: Disabling unused clocks
Jan 15 23:48:08.166904 kernel: PM: genpd: Disabling unused power domains
Jan 15 23:48:08.166922 kernel: Warning: unable to open an initial console.
Jan 15 23:48:08.166940 kernel: Freeing unused kernel memory: 39552K
Jan 15 23:48:08.166958 kernel: Run /init as init process
Jan 15 23:48:08.166975 kernel:   with arguments:
Jan 15 23:48:08.166993 kernel:     /init
Jan 15 23:48:08.167014 kernel:   with environment:
Jan 15 23:48:08.167032 kernel:     HOME=/
Jan 15 23:48:08.167049 kernel:     TERM=linux
Jan 15 23:48:08.167069 systemd[1]: Successfully made /usr/ read-only.
Jan 15 23:48:08.167094 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 15 23:48:08.167114 systemd[1]: Detected virtualization amazon.
Jan 15 23:48:08.167133 systemd[1]: Detected architecture arm64.
Jan 15 23:48:08.167155 systemd[1]: Running in initrd.
Jan 15 23:48:08.167174 systemd[1]: No hostname configured, using default hostname.
Jan 15 23:48:08.167194 systemd[1]: Hostname set to .
Jan 15 23:48:08.167213 systemd[1]: Initializing machine ID from VM UUID.
Jan 15 23:48:08.167258 systemd[1]: Queued start job for default target initrd.target.
Jan 15 23:48:08.167278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 23:48:08.167298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 23:48:08.167319 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 15 23:48:08.167345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 15 23:48:08.167365 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 15 23:48:08.167386 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 15 23:48:08.167408 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 15 23:48:08.167428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 15 23:48:08.167448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 23:48:08.167468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 15 23:48:08.167491 systemd[1]: Reached target paths.target - Path Units.
Jan 15 23:48:08.167511 systemd[1]: Reached target slices.target - Slice Units.
Jan 15 23:48:08.167530 systemd[1]: Reached target swap.target - Swaps.
Jan 15 23:48:08.167550 systemd[1]: Reached target timers.target - Timer Units.
Jan 15 23:48:08.167569 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 15 23:48:08.167588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 15 23:48:08.167608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 15 23:48:08.167627 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 15 23:48:08.167647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 15 23:48:08.167670 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 15 23:48:08.167690 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 15 23:48:08.167709 systemd[1]: Reached target sockets.target - Socket Units.
Jan 15 23:48:08.167729 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 15 23:48:08.167748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 15 23:48:08.167768 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 15 23:48:08.167788 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 15 23:48:08.167808 systemd[1]: Starting systemd-fsck-usr.service...
Jan 15 23:48:08.167831 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 15 23:48:08.167851 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 15 23:48:08.167870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:48:08.167889 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 15 23:48:08.167910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 23:48:08.167933 systemd[1]: Finished systemd-fsck-usr.service.
Jan 15 23:48:08.167953 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 15 23:48:08.168009 systemd-journald[259]: Collecting audit messages is disabled.
Jan 15 23:48:08.168052 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 15 23:48:08.168101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 15 23:48:08.168121 kernel: Bridge firewalling registered
Jan 15 23:48:08.168140 systemd-journald[259]: Journal started
Jan 15 23:48:08.168176 systemd-journald[259]: Runtime Journal (/run/log/journal/ec2a972cd32448414af0662f537c0302) is 8M, max 75.3M, 67.3M free.
Jan 15 23:48:08.127914 systemd-modules-load[260]: Inserted module 'overlay'
Jan 15 23:48:08.162877 systemd-modules-load[260]: Inserted module 'br_netfilter'
Jan 15 23:48:08.177369 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 15 23:48:08.178181 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 15 23:48:08.181494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:48:08.189147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 15 23:48:08.204459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 15 23:48:08.215023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 15 23:48:08.217157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 15 23:48:08.250514 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 15 23:48:08.255491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 15 23:48:08.271519 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 15 23:48:08.274144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 15 23:48:08.277749 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 15 23:48:08.302030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 15 23:48:08.307300 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 15 23:48:08.355184 dracut-cmdline[303]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=83f7d443283b2e87b6283ab8b3252eb2d2356b218981a63efeb3e370fba6f971
Jan 15 23:48:08.402829 systemd-resolved[298]: Positive Trust Anchors:
Jan 15 23:48:08.402863 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 15 23:48:08.402926 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 15 23:48:08.558259 kernel: SCSI subsystem initialized
Jan 15 23:48:08.566314 kernel: Loading iSCSI transport class v2.0-870.
Jan 15 23:48:08.580292 kernel: iscsi: registered transport (tcp)
Jan 15 23:48:08.602407 kernel: iscsi: registered transport (qla4xxx)
Jan 15 23:48:08.602481 kernel: QLogic iSCSI HBA Driver
Jan 15 23:48:08.637401 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 15 23:48:08.676109 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 15 23:48:08.691037 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 15 23:48:08.712293 kernel: random: crng init done
Jan 15 23:48:08.711646 systemd-resolved[298]: Defaulting to hostname 'linux'.
Jan 15 23:48:08.716722 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 15 23:48:08.729465 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:48:08.807117 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 15 23:48:08.814323 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 15 23:48:08.903300 kernel: raid6: neonx8 gen() 6473 MB/s
Jan 15 23:48:08.921267 kernel: raid6: neonx4 gen() 6458 MB/s
Jan 15 23:48:08.939265 kernel: raid6: neonx2 gen() 5399 MB/s
Jan 15 23:48:08.956277 kernel: raid6: neonx1 gen() 3934 MB/s
Jan 15 23:48:08.973275 kernel: raid6: int64x8 gen() 3547 MB/s
Jan 15 23:48:08.991290 kernel: raid6: int64x4 gen() 3679 MB/s
Jan 15 23:48:09.009287 kernel: raid6: int64x2 gen() 3537 MB/s
Jan 15 23:48:09.027428 kernel: raid6: int64x1 gen() 2748 MB/s
Jan 15 23:48:09.027501 kernel: raid6: using algorithm neonx8 gen() 6473 MB/s
Jan 15 23:48:09.046583 kernel: raid6: .... xor() 4683 MB/s, rmw enabled
Jan 15 23:48:09.046661 kernel: raid6: using neon recovery algorithm
Jan 15 23:48:09.056243 kernel: xor: measuring software checksum speed
Jan 15 23:48:09.056320 kernel: 8regs : 12971 MB/sec
Jan 15 23:48:09.058890 kernel: 32regs : 12349 MB/sec
Jan 15 23:48:09.058977 kernel: arm64_neon : 8871 MB/sec
Jan 15 23:48:09.059004 kernel: xor: using function: 8regs (12971 MB/sec)
Jan 15 23:48:09.157703 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 15 23:48:09.172047 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 15 23:48:09.180444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 15 23:48:09.252468 systemd-udevd[510]: Using default interface naming scheme 'v255'.
Jan 15 23:48:09.263949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 15 23:48:09.275492 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 15 23:48:09.322476 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation
Jan 15 23:48:09.375895 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 23:48:09.382628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 15 23:48:09.519160 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 23:48:09.537293 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 15 23:48:09.722000 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 15 23:48:09.722102 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 15 23:48:09.734318 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 15 23:48:09.734431 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 15 23:48:09.734888 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 15 23:48:09.755850 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 15 23:48:09.756197 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 15 23:48:09.756260 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 15 23:48:09.756498 kernel: GPT:9289727 != 33554431
Jan 15 23:48:09.758926 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 15 23:48:09.760976 kernel: GPT:9289727 != 33554431
Jan 15 23:48:09.763348 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 15 23:48:09.763436 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:cc:f1:59:06:c9
Jan 15 23:48:09.766649 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 15 23:48:09.774188 (udev-worker)[565]: Network interface NamePolicy= disabled on kernel command line.
Jan 15 23:48:09.795050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 15 23:48:09.796408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:48:09.804730 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:48:09.811581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 15 23:48:09.814779 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 15 23:48:09.842260 kernel: nvme nvme0: using unchecked data buffer
Jan 15 23:48:09.873113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 15 23:48:09.963149 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 15 23:48:10.054596 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 15 23:48:10.057293 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 15 23:48:10.097860 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 15 23:48:10.123640 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 15 23:48:10.127264 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 15 23:48:10.134829 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 23:48:10.142594 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 23:48:10.146737 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 15 23:48:10.160474 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 15 23:48:10.170967 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 15 23:48:10.195539 disk-uuid[688]: Primary Header is updated.
Jan 15 23:48:10.195539 disk-uuid[688]: Secondary Entries is updated.
Jan 15 23:48:10.195539 disk-uuid[688]: Secondary Header is updated.
Jan 15 23:48:10.213288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 15 23:48:10.231264 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 23:48:11.242261 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 15 23:48:11.243624 disk-uuid[689]: The operation has completed successfully.
Jan 15 23:48:11.475317 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 15 23:48:11.478353 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 15 23:48:11.564941 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 15 23:48:11.608662 sh[956]: Success
Jan 15 23:48:11.632542 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 15 23:48:11.632616 kernel: device-mapper: uevent: version 1.0.3
Jan 15 23:48:11.634842 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 15 23:48:11.648261 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 15 23:48:11.746851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 15 23:48:11.755938 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 15 23:48:11.783670 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 15 23:48:11.807969 kernel: BTRFS: device fsid 4e574c26-9d5a-48bc-a727-ae12db8ee9fc devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (979)
Jan 15 23:48:11.808077 kernel: BTRFS info (device dm-0): first mount of filesystem 4e574c26-9d5a-48bc-a727-ae12db8ee9fc
Jan 15 23:48:11.808129 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:48:11.943783 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 15 23:48:11.943859 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 15 23:48:11.943886 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 15 23:48:11.966866 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 15 23:48:11.971995 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 15 23:48:11.977804 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 15 23:48:11.984550 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 15 23:48:11.994581 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 15 23:48:12.052376 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1010)
Jan 15 23:48:12.056268 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:48:12.056346 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:48:12.074387 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 15 23:48:12.074482 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 15 23:48:12.083422 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:48:12.084878 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 15 23:48:12.093105 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 15 23:48:12.203023 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 15 23:48:12.216732 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 15 23:48:12.309370 systemd-networkd[1149]: lo: Link UP
Jan 15 23:48:12.309390 systemd-networkd[1149]: lo: Gained carrier
Jan 15 23:48:12.316189 systemd-networkd[1149]: Enumeration completed
Jan 15 23:48:12.317158 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 15 23:48:12.317278 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 23:48:12.317286 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 15 23:48:12.323505 systemd[1]: Reached target network.target - Network.
Jan 15 23:48:12.341622 systemd-networkd[1149]: eth0: Link UP
Jan 15 23:48:12.341636 systemd-networkd[1149]: eth0: Gained carrier
Jan 15 23:48:12.341659 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 15 23:48:12.362321 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.28.91/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 15 23:48:12.665471 ignition[1075]: Ignition 2.22.0
Jan 15 23:48:12.665499 ignition[1075]: Stage: fetch-offline
Jan 15 23:48:12.669525 ignition[1075]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:12.669563 ignition[1075]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:12.676182 ignition[1075]: Ignition finished successfully
Jan 15 23:48:12.680469 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 23:48:12.688864 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 15 23:48:12.756713 ignition[1161]: Ignition 2.22.0
Jan 15 23:48:12.756741 ignition[1161]: Stage: fetch
Jan 15 23:48:12.757793 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:12.757821 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:12.757971 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:12.781255 ignition[1161]: PUT result: OK
Jan 15 23:48:12.785970 ignition[1161]: parsed url from cmdline: ""
Jan 15 23:48:12.786022 ignition[1161]: no config URL provided
Jan 15 23:48:12.786062 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Jan 15 23:48:12.786116 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Jan 15 23:48:12.786172 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:12.794292 ignition[1161]: PUT result: OK
Jan 15 23:48:12.794510 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 15 23:48:12.803057 ignition[1161]: GET result: OK
Jan 15 23:48:12.805080 ignition[1161]: parsing config with SHA512: ca0495d1f36ed68924289d83f7cb674d886f3758b36adb341d4ad26cd8f25aa5c9bbf8510cc5f124c9fcf063cafd5a3a1d3fe464419cbeb94fb3eb7083fd0282
Jan 15 23:48:12.817567 unknown[1161]: fetched base config from "system"
Jan 15 23:48:12.819468 unknown[1161]: fetched base config from "system"
Jan 15 23:48:12.819732 unknown[1161]: fetched user config from "aws"
Jan 15 23:48:12.820861 ignition[1161]: fetch: fetch complete
Jan 15 23:48:12.820877 ignition[1161]: fetch: fetch passed
Jan 15 23:48:12.820995 ignition[1161]: Ignition finished successfully
Jan 15 23:48:12.834569 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 15 23:48:12.843794 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 15 23:48:12.921971 ignition[1168]: Ignition 2.22.0
Jan 15 23:48:12.922635 ignition[1168]: Stage: kargs
Jan 15 23:48:12.923783 ignition[1168]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:12.923816 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:12.923990 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:12.935949 ignition[1168]: PUT result: OK
Jan 15 23:48:12.941302 ignition[1168]: kargs: kargs passed
Jan 15 23:48:12.941454 ignition[1168]: Ignition finished successfully
Jan 15 23:48:12.950363 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 15 23:48:12.962150 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 15 23:48:13.010921 ignition[1175]: Ignition 2.22.0
Jan 15 23:48:13.010957 ignition[1175]: Stage: disks
Jan 15 23:48:13.011579 ignition[1175]: no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:13.011605 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:13.011763 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:13.022806 ignition[1175]: PUT result: OK
Jan 15 23:48:13.033752 ignition[1175]: disks: disks passed
Jan 15 23:48:13.033972 ignition[1175]: Ignition finished successfully
Jan 15 23:48:13.040647 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 15 23:48:13.050703 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 15 23:48:13.057743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 15 23:48:13.068860 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 15 23:48:13.074104 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 15 23:48:13.087536 systemd[1]: Reached target basic.target - Basic System.
Jan 15 23:48:13.095076 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 15 23:48:13.174373 systemd-fsck[1183]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 15 23:48:13.181255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 15 23:48:13.185754 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 15 23:48:13.320286 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e775b4a8-7fa9-4c45-80b7-b5e0f0a5e4b9 r/w with ordered data mode. Quota mode: none.
Jan 15 23:48:13.322066 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 15 23:48:13.325700 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 15 23:48:13.336625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 23:48:13.344780 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 15 23:48:13.356843 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 15 23:48:13.356974 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 15 23:48:13.357029 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 23:48:13.384481 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 15 23:48:13.391526 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 15 23:48:13.397853 systemd-networkd[1149]: eth0: Gained IPv6LL
Jan 15 23:48:13.414266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1202)
Jan 15 23:48:13.418838 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:48:13.418914 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:48:13.428148 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 15 23:48:13.428263 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 15 23:48:13.431338 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 23:48:13.818830 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory
Jan 15 23:48:13.872433 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory
Jan 15 23:48:13.881266 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory
Jan 15 23:48:13.892381 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 15 23:48:14.247516 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 15 23:48:14.253687 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 15 23:48:14.265332 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 15 23:48:14.293572 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 15 23:48:14.300253 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:48:14.332144 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 15 23:48:14.351562 ignition[1315]: INFO : Ignition 2.22.0
Jan 15 23:48:14.353857 ignition[1315]: INFO : Stage: mount
Jan 15 23:48:14.353857 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:14.353857 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:14.353857 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:14.366549 ignition[1315]: INFO : PUT result: OK
Jan 15 23:48:14.371978 ignition[1315]: INFO : mount: mount passed
Jan 15 23:48:14.374166 ignition[1315]: INFO : Ignition finished successfully
Jan 15 23:48:14.377080 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 15 23:48:14.384638 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 15 23:48:14.413443 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 15 23:48:14.453249 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327)
Jan 15 23:48:14.453313 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c6a95867-5704-41e1-8beb-48e00b50aef1
Jan 15 23:48:14.457037 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 15 23:48:14.464882 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 15 23:48:14.464935 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 15 23:48:14.468934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 15 23:48:14.518393 ignition[1344]: INFO : Ignition 2.22.0
Jan 15 23:48:14.518393 ignition[1344]: INFO : Stage: files
Jan 15 23:48:14.523551 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:14.523551 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:14.523551 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:14.523551 ignition[1344]: INFO : PUT result: OK
Jan 15 23:48:14.536533 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping
Jan 15 23:48:14.540519 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 15 23:48:14.540519 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 15 23:48:14.555771 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 15 23:48:14.560202 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 15 23:48:14.560202 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 15 23:48:14.557752 unknown[1344]: wrote ssh authorized keys file for user: core
Jan 15 23:48:14.569530 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 15 23:48:14.569530 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 15 23:48:14.660146 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 15 23:48:15.014699 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 15 23:48:15.019681 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 15 23:48:15.024609 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 15 23:48:15.029272 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 23:48:15.034546 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 15 23:48:15.039483 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 23:48:15.044593 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 15 23:48:15.049658 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 23:48:15.054966 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 15 23:48:15.065658 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 23:48:15.071702 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 15 23:48:15.076078 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:48:15.076078 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:48:15.076078 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:48:15.076078 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 15 23:48:15.549573 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 15 23:48:15.926563 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 15 23:48:15.926563 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 15 23:48:15.936947 ignition[1344]: INFO : files: files passed
Jan 15 23:48:15.936947 ignition[1344]: INFO : Ignition finished successfully
Jan 15 23:48:15.968673 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 15 23:48:15.976773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 15 23:48:15.988731 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 15 23:48:16.013565 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 15 23:48:16.014037 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 15 23:48:16.039101 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 23:48:16.045315 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 23:48:16.045315 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 15 23:48:16.057699 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 23:48:16.064450 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 15 23:48:16.071608 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 15 23:48:16.171578 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 15 23:48:16.172077 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 15 23:48:16.181879 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 15 23:48:16.187041 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 15 23:48:16.193460 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 15 23:48:16.199596 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 15 23:48:16.257311 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 23:48:16.264709 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 15 23:48:16.299181 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 15 23:48:16.306786 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 15 23:48:16.310610 systemd[1]: Stopped target timers.target - Timer Units.
Jan 15 23:48:16.314342 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 15 23:48:16.314921 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 15 23:48:16.330189 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 15 23:48:16.336622 systemd[1]: Stopped target basic.target - Basic System.
Jan 15 23:48:16.341949 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 15 23:48:16.345979 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 15 23:48:16.355052 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 15 23:48:16.359326 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 15 23:48:16.368653 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 15 23:48:16.372556 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 15 23:48:16.381556 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 15 23:48:16.385011 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 15 23:48:16.394424 systemd[1]: Stopped target swap.target - Swaps.
Jan 15 23:48:16.397001 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 15 23:48:16.397311 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 15 23:48:16.408255 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 15 23:48:16.412171 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 15 23:48:16.422869 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 15 23:48:16.425873 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 15 23:48:16.430161 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 15 23:48:16.430469 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 15 23:48:16.443767 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 15 23:48:16.444381 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 15 23:48:16.450862 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 15 23:48:16.451236 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 15 23:48:16.462594 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 15 23:48:16.470464 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 15 23:48:16.473668 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 15 23:48:16.490130 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 15 23:48:16.497439 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 15 23:48:16.500678 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 15 23:48:16.509770 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 15 23:48:16.510038 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 15 23:48:16.531714 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 15 23:48:16.536504 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 15 23:48:16.565427 ignition[1397]: INFO : Ignition 2.22.0
Jan 15 23:48:16.569690 ignition[1397]: INFO : Stage: umount
Jan 15 23:48:16.572601 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 15 23:48:16.572601 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 15 23:48:16.579357 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 15 23:48:16.584869 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 15 23:48:16.587454 ignition[1397]: INFO : PUT result: OK
Jan 15 23:48:16.596886 ignition[1397]: INFO : umount: umount passed
Jan 15 23:48:16.596886 ignition[1397]: INFO : Ignition finished successfully
Jan 15 23:48:16.600262 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 15 23:48:16.600469 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 15 23:48:16.608477 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 15 23:48:16.608627 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 15 23:48:16.617593 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 15 23:48:16.617704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 15 23:48:16.625771 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 15 23:48:16.625871 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 15 23:48:16.635513 systemd[1]: Stopped target network.target - Network.
Jan 15 23:48:16.637957 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 15 23:48:16.638076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 15 23:48:16.646148 systemd[1]: Stopped target paths.target - Path Units.
Jan 15 23:48:16.649265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 15 23:48:16.660792 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 15 23:48:16.664457 systemd[1]: Stopped target slices.target - Slice Units. Jan 15 23:48:16.674194 systemd[1]: Stopped target sockets.target - Socket Units. Jan 15 23:48:16.677833 systemd[1]: iscsid.socket: Deactivated successfully. Jan 15 23:48:16.677915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 15 23:48:16.685748 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 15 23:48:16.685836 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 15 23:48:16.689173 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 15 23:48:16.689317 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 15 23:48:16.702033 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 15 23:48:16.702342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 15 23:48:16.711593 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 15 23:48:16.731087 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 15 23:48:16.747622 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 15 23:48:16.752520 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 15 23:48:16.765962 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 15 23:48:16.766730 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 15 23:48:16.766953 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 15 23:48:16.777988 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 15 23:48:16.778837 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 15 23:48:16.779063 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 15 23:48:16.793210 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 15 23:48:16.798923 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jan 15 23:48:16.799008 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:48:16.802809 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 15 23:48:16.802931 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 15 23:48:16.815212 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 15 23:48:16.828749 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 15 23:48:16.828902 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 15 23:48:16.839124 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 15 23:48:16.839263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:48:16.844570 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 15 23:48:16.844671 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 15 23:48:16.847675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 15 23:48:16.847788 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 23:48:16.857532 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:48:16.869121 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 15 23:48:16.869445 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 15 23:48:16.900568 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 15 23:48:16.901255 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:48:16.913743 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 15 23:48:16.914160 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 15 23:48:16.924129 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 15 23:48:16.924515 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 15 23:48:16.933183 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 15 23:48:16.933463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:48:16.941551 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 15 23:48:16.941675 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 15 23:48:16.951168 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 15 23:48:16.951460 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 15 23:48:16.959287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 15 23:48:16.959416 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 15 23:48:16.970660 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 15 23:48:16.977108 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 15 23:48:16.977295 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:48:16.980907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 15 23:48:16.981006 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:48:16.998489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 15 23:48:16.998597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:48:17.007522 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 15 23:48:17.007646 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 15 23:48:17.007735 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 15 23:48:17.033522 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 15 23:48:17.033972 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 15 23:48:17.045090 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 15 23:48:17.051950 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 15 23:48:17.097628 systemd[1]: Switching root. Jan 15 23:48:17.175075 systemd-journald[259]: Journal stopped Jan 15 23:48:19.882851 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). Jan 15 23:48:19.882986 kernel: SELinux: policy capability network_peer_controls=1 Jan 15 23:48:19.883033 kernel: SELinux: policy capability open_perms=1 Jan 15 23:48:19.883064 kernel: SELinux: policy capability extended_socket_class=1 Jan 15 23:48:19.883104 kernel: SELinux: policy capability always_check_network=0 Jan 15 23:48:19.883137 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 15 23:48:19.883170 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 15 23:48:19.883209 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 15 23:48:19.883308 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 15 23:48:19.883343 kernel: SELinux: policy capability userspace_initial_context=0 Jan 15 23:48:19.883378 kernel: audit: type=1403 audit(1768520897.679:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 15 23:48:19.883421 systemd[1]: Successfully loaded SELinux policy in 134.825ms. Jan 15 23:48:19.883465 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.685ms. 
Jan 15 23:48:19.883502 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 15 23:48:19.883536 systemd[1]: Detected virtualization amazon. Jan 15 23:48:19.883571 systemd[1]: Detected architecture arm64. Jan 15 23:48:19.883602 systemd[1]: Detected first boot. Jan 15 23:48:19.883632 systemd[1]: Initializing machine ID from VM UUID. Jan 15 23:48:19.883664 zram_generator::config[1441]: No configuration found. Jan 15 23:48:19.883705 kernel: NET: Registered PF_VSOCK protocol family Jan 15 23:48:19.883736 systemd[1]: Populated /etc with preset unit settings. Jan 15 23:48:19.883770 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 15 23:48:19.883803 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 15 23:48:19.883836 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 15 23:48:19.883875 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 15 23:48:19.883905 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 15 23:48:19.883939 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 15 23:48:19.883974 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 15 23:48:19.884032 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 15 23:48:19.884065 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 15 23:48:19.884102 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 15 23:48:19.884135 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jan 15 23:48:19.884176 systemd[1]: Created slice user.slice - User and Session Slice. Jan 15 23:48:19.884207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 15 23:48:19.886329 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 15 23:48:19.886388 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 15 23:48:19.886422 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 15 23:48:19.886452 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 15 23:48:19.886484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 15 23:48:19.886520 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 15 23:48:19.886552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 15 23:48:19.886592 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 15 23:48:19.886621 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 15 23:48:19.886654 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 15 23:48:19.886685 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 15 23:48:19.886715 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 15 23:48:19.886745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 15 23:48:19.886776 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 15 23:48:19.886805 systemd[1]: Reached target slices.target - Slice Units. Jan 15 23:48:19.886839 systemd[1]: Reached target swap.target - Swaps. Jan 15 23:48:19.886869 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jan 15 23:48:19.886899 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 15 23:48:19.886931 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 15 23:48:19.886960 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 15 23:48:19.886990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 15 23:48:19.887019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 15 23:48:19.887049 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 15 23:48:19.887082 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 15 23:48:19.887118 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 15 23:48:19.887150 systemd[1]: Mounting media.mount - External Media Directory... Jan 15 23:48:19.887182 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 15 23:48:19.887244 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 15 23:48:19.887287 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 15 23:48:19.887319 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 15 23:48:19.887350 systemd[1]: Reached target machines.target - Containers. Jan 15 23:48:19.887379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 15 23:48:19.887409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:48:19.887446 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 15 23:48:19.887478 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jan 15 23:48:19.887508 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:48:19.887542 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 23:48:19.887576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:48:19.887609 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 15 23:48:19.887639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:48:19.887669 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 15 23:48:19.887706 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 15 23:48:19.887736 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 15 23:48:19.887766 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 15 23:48:19.887798 systemd[1]: Stopped systemd-fsck-usr.service. Jan 15 23:48:19.887830 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:48:19.887861 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 15 23:48:19.887888 kernel: loop: module loaded Jan 15 23:48:19.887920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 15 23:48:19.887952 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 15 23:48:19.888011 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 15 23:48:19.888051 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 15 23:48:19.888083 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jan 15 23:48:19.888115 systemd[1]: verity-setup.service: Deactivated successfully. Jan 15 23:48:19.888146 systemd[1]: Stopped verity-setup.service. Jan 15 23:48:19.888183 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 15 23:48:19.888212 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 15 23:48:19.890350 kernel: fuse: init (API version 7.41) Jan 15 23:48:19.890388 systemd[1]: Mounted media.mount - External Media Directory. Jan 15 23:48:19.890420 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 15 23:48:19.890456 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 15 23:48:19.890488 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 15 23:48:19.890518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 15 23:48:19.890549 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 15 23:48:19.890579 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 15 23:48:19.890608 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:48:19.890639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:48:19.890669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:48:19.890698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:48:19.890810 systemd-journald[1524]: Collecting audit messages is disabled. Jan 15 23:48:19.890875 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 15 23:48:19.890907 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 15 23:48:19.890935 systemd-journald[1524]: Journal started Jan 15 23:48:19.890983 systemd-journald[1524]: Runtime Journal (/run/log/journal/ec2a972cd32448414af0662f537c0302) is 8M, max 75.3M, 67.3M free. Jan 15 23:48:19.187068 systemd[1]: Queued start job for default target multi-user.target. 
Jan 15 23:48:19.201014 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 15 23:48:19.201963 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 15 23:48:19.899630 systemd[1]: Started systemd-journald.service - Journal Service. Jan 15 23:48:19.903694 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:48:19.904286 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:48:19.914870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 15 23:48:19.922550 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 15 23:48:19.929506 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 15 23:48:19.969983 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 15 23:48:19.980347 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 15 23:48:19.990289 kernel: ACPI: bus type drm_connector registered Jan 15 23:48:19.995425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 15 23:48:20.000886 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 15 23:48:20.000955 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 15 23:48:20.009851 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 15 23:48:20.016655 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 15 23:48:20.021925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:48:20.030300 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 15 23:48:20.041712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 15 23:48:20.047351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 23:48:20.049916 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 15 23:48:20.054436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 23:48:20.060622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 15 23:48:20.073566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 15 23:48:20.090433 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 15 23:48:20.097535 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 23:48:20.097950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 23:48:20.106708 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 15 23:48:20.110872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 15 23:48:20.116006 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 15 23:48:20.141717 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 15 23:48:20.162180 systemd-journald[1524]: Time spent on flushing to /var/log/journal/ec2a972cd32448414af0662f537c0302 is 211.817ms for 924 entries. Jan 15 23:48:20.162180 systemd-journald[1524]: System Journal (/var/log/journal/ec2a972cd32448414af0662f537c0302) is 8M, max 195.6M, 187.6M free. Jan 15 23:48:20.398987 systemd-journald[1524]: Received client request to flush runtime journal. Jan 15 23:48:20.400001 kernel: loop0: detected capacity change from 0 to 100632 Jan 15 23:48:20.167548 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 15 23:48:20.173829 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 15 23:48:20.183515 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 15 23:48:20.234644 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 15 23:48:20.291952 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 15 23:48:20.302269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 15 23:48:20.340524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 15 23:48:20.395523 systemd-tmpfiles[1586]: ACLs are not supported, ignoring. Jan 15 23:48:20.425772 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 15 23:48:20.395805 systemd-tmpfiles[1586]: ACLs are not supported, ignoring. Jan 15 23:48:20.408340 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 15 23:48:20.432807 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 15 23:48:20.437076 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 15 23:48:20.445499 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 15 23:48:20.456623 kernel: loop1: detected capacity change from 0 to 207008 Jan 15 23:48:20.582276 kernel: loop2: detected capacity change from 0 to 61264 Jan 15 23:48:20.722276 kernel: loop3: detected capacity change from 0 to 119840 Jan 15 23:48:20.840270 kernel: loop4: detected capacity change from 0 to 100632 Jan 15 23:48:20.875588 kernel: loop5: detected capacity change from 0 to 207008 Jan 15 23:48:20.912262 kernel: loop6: detected capacity change from 0 to 61264 Jan 15 23:48:20.936304 kernel: loop7: detected capacity change from 0 to 119840 Jan 15 23:48:20.951101 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. 
Jan 15 23:48:20.956855 (sd-merge)[1603]: Merged extensions into '/usr'. Jan 15 23:48:20.965455 systemd[1]: Reload requested from client PID 1572 ('systemd-sysext') (unit systemd-sysext.service)... Jan 15 23:48:20.966069 systemd[1]: Reloading... Jan 15 23:48:21.214331 zram_generator::config[1632]: No configuration found. Jan 15 23:48:21.759567 systemd[1]: Reloading finished in 792 ms. Jan 15 23:48:21.791283 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 15 23:48:21.797847 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 15 23:48:21.819517 systemd[1]: Starting ensure-sysext.service... Jan 15 23:48:21.825515 ldconfig[1567]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 15 23:48:21.826703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 15 23:48:21.842996 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 15 23:48:21.866907 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 15 23:48:21.883606 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit ensure-sysext.service)... Jan 15 23:48:21.883646 systemd[1]: Reloading... Jan 15 23:48:21.897963 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 15 23:48:21.898065 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 15 23:48:21.898758 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 15 23:48:21.901725 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 15 23:48:21.907959 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 15 23:48:21.909566 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 15 23:48:21.910108 systemd-tmpfiles[1682]: ACLs are not supported, ignoring. Jan 15 23:48:21.934118 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 23:48:21.934140 systemd-tmpfiles[1682]: Skipping /boot Jan 15 23:48:21.978816 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot. Jan 15 23:48:21.979032 systemd-tmpfiles[1682]: Skipping /boot Jan 15 23:48:22.026588 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Jan 15 23:48:22.085293 zram_generator::config[1713]: No configuration found. Jan 15 23:48:22.397482 (udev-worker)[1718]: Network interface NamePolicy= disabled on kernel command line. Jan 15 23:48:22.734721 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 15 23:48:22.736891 systemd[1]: Reloading finished in 852 ms. Jan 15 23:48:22.776809 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 15 23:48:22.810065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 15 23:48:22.845183 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 23:48:22.854814 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 15 23:48:22.864559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 15 23:48:22.875847 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 15 23:48:22.883143 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 15 23:48:22.949665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 15 23:48:22.968948 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 15 23:48:22.979157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:48:22.986302 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 15 23:48:22.998435 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 15 23:48:23.009908 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 15 23:48:23.013477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:48:23.013792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:48:23.020063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:48:23.020814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 15 23:48:23.021553 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:48:23.031347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 15 23:48:23.036990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 15 23:48:23.042664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 15 23:48:23.042985 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 15 23:48:23.043366 systemd[1]: Reached target time-set.target - System Time Set. Jan 15 23:48:23.093657 systemd[1]: Finished ensure-sysext.service. Jan 15 23:48:23.128754 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 15 23:48:23.149348 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 15 23:48:23.161809 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 15 23:48:23.220854 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 15 23:48:23.222713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 15 23:48:23.236171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 15 23:48:23.238452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 15 23:48:23.243003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 15 23:48:23.292516 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 15 23:48:23.296962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 15 23:48:23.297363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 15 23:48:23.301404 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 15 23:48:23.304456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 15 23:48:23.310262 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 15 23:48:23.346414 augenrules[1925]: No rules Jan 15 23:48:23.350706 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 15 23:48:23.352547 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 23:48:23.373057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 15 23:48:23.373394 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 15 23:48:23.601443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 15 23:48:23.616107 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 15 23:48:23.699925 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 15 23:48:23.714472 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 15 23:48:23.772369 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 15 23:48:23.860320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 15 23:48:23.934772 systemd-resolved[1843]: Positive Trust Anchors: Jan 15 23:48:23.935376 systemd-resolved[1843]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 15 23:48:23.935572 systemd-resolved[1843]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 15 23:48:23.946259 systemd-networkd[1841]: lo: Link UP Jan 15 23:48:23.946283 systemd-networkd[1841]: lo: Gained carrier Jan 15 23:48:23.949778 systemd-resolved[1843]: Defaulting to hostname 'linux'. Jan 15 23:48:23.950888 systemd-networkd[1841]: Enumeration completed Jan 15 23:48:23.951186 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 15 23:48:23.955325 systemd-networkd[1841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:48:23.955348 systemd-networkd[1841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 15 23:48:23.959005 systemd-networkd[1841]: eth0: Link UP Jan 15 23:48:23.959334 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 15 23:48:23.964313 systemd-networkd[1841]: eth0: Gained carrier Jan 15 23:48:23.964367 systemd-networkd[1841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 15 23:48:23.967678 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 15 23:48:23.971874 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 15 23:48:23.975188 systemd[1]: Reached target network.target - Network. Jan 15 23:48:23.978063 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 15 23:48:23.981527 systemd[1]: Reached target sysinit.target - System Initialization. Jan 15 23:48:23.984638 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 15 23:48:23.988677 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 15 23:48:23.992973 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 15 23:48:23.996652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 15 23:48:24.001566 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 15 23:48:24.008156 systemd-networkd[1841]: eth0: DHCPv4 address 172.31.28.91/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 15 23:48:24.009437 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 15 23:48:24.009493 systemd[1]: Reached target paths.target - Path Units. Jan 15 23:48:24.012473 systemd[1]: Reached target timers.target - Timer Units. Jan 15 23:48:24.017287 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 15 23:48:24.029949 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 15 23:48:24.039580 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 15 23:48:24.043555 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 15 23:48:24.047092 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 15 23:48:24.063483 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 15 23:48:24.066692 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 15 23:48:24.071090 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 15 23:48:24.074236 systemd[1]: Reached target sockets.target - Socket Units. Jan 15 23:48:24.078259 systemd[1]: Reached target basic.target - Basic System. Jan 15 23:48:24.081040 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 15 23:48:24.081090 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 15 23:48:24.084400 systemd[1]: Starting containerd.service - containerd container runtime... Jan 15 23:48:24.091543 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 15 23:48:24.100580 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 15 23:48:24.107804 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 15 23:48:24.114947 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 15 23:48:24.128629 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 15 23:48:24.132385 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 15 23:48:24.135893 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 15 23:48:24.145633 systemd[1]: Started ntpd.service - Network Time Service. Jan 15 23:48:24.156569 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 15 23:48:24.162867 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 15 23:48:24.172628 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 15 23:48:24.189424 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 15 23:48:24.207165 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 15 23:48:24.212873 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 15 23:48:24.230788 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 15 23:48:24.241686 systemd[1]: Starting update-engine.service - Update Engine... Jan 15 23:48:24.245643 jq[1968]: false Jan 15 23:48:24.250903 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 15 23:48:24.259289 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 15 23:48:24.270084 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 15 23:48:24.274963 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 15 23:48:24.276355 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 15 23:48:24.337479 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 15 23:48:24.340421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 15 23:48:24.381900 systemd[1]: motdgen.service: Deactivated successfully. Jan 15 23:48:24.382414 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 15 23:48:24.394850 extend-filesystems[1969]: Found /dev/nvme0n1p6 Jan 15 23:48:24.398877 (ntainerd)[1995]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 15 23:48:24.412042 jq[1983]: true Jan 15 23:48:24.423258 extend-filesystems[1969]: Found /dev/nvme0n1p9 Jan 15 23:48:24.435753 extend-filesystems[1969]: Checking size of /dev/nvme0n1p9 Jan 15 23:48:24.458434 tar[1988]: linux-arm64/LICENSE Jan 15 23:48:24.467130 tar[1988]: linux-arm64/helm Jan 15 23:48:24.500688 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 15 23:48:24.507819 extend-filesystems[1969]: Resized partition /dev/nvme0n1p9 Jan 15 23:48:24.526262 extend-filesystems[2020]: resize2fs 1.47.3 (8-Jul-2025) Jan 15 23:48:24.549437 jq[2011]: true Jan 15 23:48:24.545752 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 15 23:48:24.545177 dbus-daemon[1966]: [system] SELinux support is enabled Jan 15 23:48:24.555368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 15 23:48:24.556758 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 15 23:48:24.560638 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 15 23:48:24.560680 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 15 23:48:24.590676 update_engine[1980]: I20260115 23:48:24.584429 1980 main.cc:92] Flatcar Update Engine starting Jan 15 23:48:24.585566 dbus-daemon[1966]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1841 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 15 23:48:24.595668 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: ntpd 4.2.8p18@1.4062-o Thu Jan 15 21:31:42 UTC 2026 (1): Starting Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: ---------------------------------------------------- Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: ntp-4 is maintained by Network Time Foundation, Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: corporation. Support and training for ntp-4 are Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: available at https://www.nwtime.org/support Jan 15 23:48:24.595796 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: ---------------------------------------------------- Jan 15 23:48:24.593994 ntpd[1971]: ntpd 4.2.8p18@1.4062-o Thu Jan 15 21:31:42 UTC 2026 (1): Starting Jan 15 23:48:24.594097 ntpd[1971]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 15 23:48:24.594117 ntpd[1971]: ---------------------------------------------------- Jan 15 23:48:24.594134 ntpd[1971]: ntp-4 is maintained by Network Time Foundation, Jan 15 23:48:24.594150 ntpd[1971]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 15 23:48:24.594166 ntpd[1971]: corporation. 
Support and training for ntp-4 are Jan 15 23:48:24.594183 ntpd[1971]: available at https://www.nwtime.org/support Jan 15 23:48:24.594200 ntpd[1971]: ---------------------------------------------------- Jan 15 23:48:24.623761 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 15 23:48:24.628111 ntpd[1971]: proto: precision = 0.096 usec (-23) Jan 15 23:48:24.630587 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: proto: precision = 0.096 usec (-23) Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: basedate set to 2026-01-03 Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: gps base set to 2026-01-04 (week 2400) Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123 Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123 Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Listen normally on 3 eth0 172.31.28.91:123 Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: Listen normally on 4 lo [::1]:123 Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: bind(21) AF_INET6 [fe80::4cc:f1ff:fe59:6c9%2]:123 flags 0x811 failed: Cannot assign requested address Jan 15 23:48:24.650427 ntpd[1971]: 15 Jan 23:48:24 ntpd[1971]: unable to create socket on eth0 (5) for [fe80::4cc:f1ff:fe59:6c9%2]:123 Jan 15 23:48:24.636864 ntpd[1971]: basedate set to 2026-01-03 Jan 15 23:48:24.636903 ntpd[1971]: gps base set to 2026-01-04 (week 2400) Jan 15 23:48:24.637098 ntpd[1971]: Listen and drop on 0 v6wildcard [::]:123 Jan 15 23:48:24.637143 ntpd[1971]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 15 23:48:24.637519 ntpd[1971]: Listen normally on 2 lo 127.0.0.1:123 Jan 15 23:48:24.637572 ntpd[1971]: Listen normally on 3 eth0 172.31.28.91:123 Jan 15 23:48:24.637626 ntpd[1971]: Listen normally on 4 lo [::1]:123 
Jan 15 23:48:24.637677 ntpd[1971]: bind(21) AF_INET6 [fe80::4cc:f1ff:fe59:6c9%2]:123 flags 0x811 failed: Cannot assign requested address Jan 15 23:48:24.637716 ntpd[1971]: unable to create socket on eth0 (5) for [fe80::4cc:f1ff:fe59:6c9%2]:123 Jan 15 23:48:24.654201 systemd-coredump[2026]: Process 1971 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 15 23:48:24.658009 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 15 23:48:24.664266 systemd[1]: Started update-engine.service - Update Engine. Jan 15 23:48:24.677488 update_engine[1980]: I20260115 23:48:24.668529 1980 update_check_scheduler.cc:74] Next update check in 7m47s Jan 15 23:48:24.668611 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 15 23:48:24.688476 systemd[1]: Started systemd-coredump@0-2026-0.service - Process Core Dump (PID 2026/UID 0). Jan 15 23:48:24.729969 coreos-metadata[1965]: Jan 15 23:48:24.728 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 15 23:48:24.730556 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 15 23:48:24.736317 systemd[1]: Finished setup-oem.service - Setup OEM. 
Jan 15 23:48:24.760456 coreos-metadata[1965]: Jan 15 23:48:24.760 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 15 23:48:24.769737 coreos-metadata[1965]: Jan 15 23:48:24.769 INFO Fetch successful Jan 15 23:48:24.769737 coreos-metadata[1965]: Jan 15 23:48:24.769 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 15 23:48:24.775693 coreos-metadata[1965]: Jan 15 23:48:24.775 INFO Fetch successful Jan 15 23:48:24.775693 coreos-metadata[1965]: Jan 15 23:48:24.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 15 23:48:24.776749 coreos-metadata[1965]: Jan 15 23:48:24.776 INFO Fetch successful Jan 15 23:48:24.776749 coreos-metadata[1965]: Jan 15 23:48:24.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 15 23:48:24.780507 coreos-metadata[1965]: Jan 15 23:48:24.780 INFO Fetch successful Jan 15 23:48:24.780507 coreos-metadata[1965]: Jan 15 23:48:24.780 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 15 23:48:24.781492 coreos-metadata[1965]: Jan 15 23:48:24.781 INFO Fetch failed with 404: resource not found Jan 15 23:48:24.781492 coreos-metadata[1965]: Jan 15 23:48:24.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 15 23:48:24.786736 coreos-metadata[1965]: Jan 15 23:48:24.786 INFO Fetch successful Jan 15 23:48:24.786736 coreos-metadata[1965]: Jan 15 23:48:24.786 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 15 23:48:24.787774 coreos-metadata[1965]: Jan 15 23:48:24.787 INFO Fetch successful Jan 15 23:48:24.788329 coreos-metadata[1965]: Jan 15 23:48:24.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 15 23:48:24.789038 coreos-metadata[1965]: Jan 15 23:48:24.788 INFO Fetch successful Jan 15 23:48:24.789038 coreos-metadata[1965]: Jan 15 
23:48:24.788 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 15 23:48:24.807065 coreos-metadata[1965]: Jan 15 23:48:24.805 INFO Fetch successful Jan 15 23:48:24.807065 coreos-metadata[1965]: Jan 15 23:48:24.805 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 15 23:48:24.809965 coreos-metadata[1965]: Jan 15 23:48:24.808 INFO Fetch successful Jan 15 23:48:24.830997 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 15 23:48:24.866567 extend-filesystems[2020]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 15 23:48:24.866567 extend-filesystems[2020]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 15 23:48:24.866567 extend-filesystems[2020]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 15 23:48:24.892355 extend-filesystems[1969]: Resized filesystem in /dev/nvme0n1p9 Jan 15 23:48:24.881409 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 15 23:48:24.908380 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 15 23:48:24.983573 bash[2053]: Updated "/home/core/.ssh/authorized_keys" Jan 15 23:48:24.997012 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 15 23:48:25.005599 systemd[1]: Starting sshkeys.service... Jan 15 23:48:25.058947 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 15 23:48:25.062613 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 15 23:48:25.103281 systemd-logind[1976]: Watching system buttons on /dev/input/event0 (Power Button) Jan 15 23:48:25.103388 systemd-logind[1976]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 15 23:48:25.104591 systemd-logind[1976]: New seat seat0. Jan 15 23:48:25.106689 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 15 23:48:25.151419 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 15 23:48:25.160640 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 15 23:48:25.453909 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 15 23:48:25.462600 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 15 23:48:25.469137 dbus-daemon[1966]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 15 23:48:25.482654 systemd[1]: Starting polkit.service - Authorization Manager... Jan 15 23:48:25.565973 coreos-metadata[2077]: Jan 15 23:48:25.564 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 15 23:48:25.565973 coreos-metadata[2077]: Jan 15 23:48:25.565 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 15 23:48:25.571019 containerd[1995]: time="2026-01-15T23:48:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 15 23:48:25.571472 coreos-metadata[2077]: Jan 15 23:48:25.568 INFO Fetch successful Jan 15 23:48:25.571472 coreos-metadata[2077]: Jan 15 23:48:25.571 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 15 23:48:25.574448 containerd[1995]: time="2026-01-15T23:48:25.573725605Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 15 23:48:25.574574 coreos-metadata[2077]: Jan 15 23:48:25.573 INFO Fetch successful Jan 15 23:48:25.579082 unknown[2077]: wrote ssh authorized keys file for user: core Jan 15 23:48:25.677306 update-ssh-keys[2143]: Updated 
"/home/core/.ssh/authorized_keys" Jan 15 23:48:25.681046 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.668877998Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.02µs" Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.693401450Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.693471890Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.693778358Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.693829466Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.693898298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.694032110Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.694063838Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.694557146Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.694604750Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.694636802Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696111 containerd[1995]: time="2026-01-15T23:48:25.694658906Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696691 containerd[1995]: time="2026-01-15T23:48:25.694871018Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 15 23:48:25.696820 systemd[1]: Finished sshkeys.service. Jan 15 23:48:25.707557 containerd[1995]: time="2026-01-15T23:48:25.706418330Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 23:48:25.707557 containerd[1995]: time="2026-01-15T23:48:25.706543358Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 15 23:48:25.707557 containerd[1995]: time="2026-01-15T23:48:25.706576238Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 15 23:48:25.707557 containerd[1995]: time="2026-01-15T23:48:25.706658474Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 15 23:48:25.707557 containerd[1995]: time="2026-01-15T23:48:25.707073350Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 15 23:48:25.707557 containerd[1995]: time="2026-01-15T23:48:25.707247362Z" level=info 
msg="metadata content store policy set" policy=shared Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.718796030Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.719023994Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.719079794Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.719119094Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.719152898Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.719186354Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 15 23:48:25.720128 containerd[1995]: time="2026-01-15T23:48:25.719304086Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 15 23:48:25.720521 containerd[1995]: time="2026-01-15T23:48:25.720317942Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 15 23:48:25.720521 containerd[1995]: time="2026-01-15T23:48:25.720389438Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 15 23:48:25.720521 containerd[1995]: time="2026-01-15T23:48:25.720419018Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 15 23:48:25.720521 containerd[1995]: time="2026-01-15T23:48:25.720445922Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 15 23:48:25.720521 containerd[1995]: time="2026-01-15T23:48:25.720479162Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 15 23:48:25.721541 containerd[1995]: time="2026-01-15T23:48:25.721380842Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 15 23:48:25.721541 containerd[1995]: time="2026-01-15T23:48:25.721451150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 15 23:48:25.721541 containerd[1995]: time="2026-01-15T23:48:25.721494890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 15 23:48:25.721541 containerd[1995]: time="2026-01-15T23:48:25.721528526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 15 23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721558034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 15 23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721588586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 15 23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721618310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 15 23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721646954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 15 23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721677050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 15 23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721705646Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 15 
23:48:25.721777 containerd[1995]: time="2026-01-15T23:48:25.721733126Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 15 23:48:25.726763 containerd[1995]: time="2026-01-15T23:48:25.724646114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 15 23:48:25.726763 containerd[1995]: time="2026-01-15T23:48:25.724729394Z" level=info msg="Start snapshots syncer" Jan 15 23:48:25.726763 containerd[1995]: time="2026-01-15T23:48:25.724791938Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 15 23:48:25.728263 containerd[1995]: time="2026-01-15T23:48:25.727257038Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolume
s\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 15 23:48:25.728263 containerd[1995]: time="2026-01-15T23:48:25.727386314Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 15 23:48:25.729867 containerd[1995]: time="2026-01-15T23:48:25.729710402Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 15 23:48:25.730207 containerd[1995]: time="2026-01-15T23:48:25.730135634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 15 23:48:25.730310 containerd[1995]: time="2026-01-15T23:48:25.730211786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 15 23:48:25.730310 containerd[1995]: time="2026-01-15T23:48:25.730273154Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 15 23:48:25.730398 containerd[1995]: time="2026-01-15T23:48:25.730304966Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 15 23:48:25.730398 containerd[1995]: time="2026-01-15T23:48:25.730339442Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 15 23:48:25.730398 containerd[1995]: time="2026-01-15T23:48:25.730368722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 15 23:48:25.730586 containerd[1995]: 
time="2026-01-15T23:48:25.730398302Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733290314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733371338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733411094Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733538246Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733668854Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733699514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733728314Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733767626Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733805450Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.733835354Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.734015486Z" level=info msg="runtime interface created" Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.734038622Z" level=info msg="created NRI interface" Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.734067974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.734101838Z" level=info msg="Connect containerd service" Jan 15 23:48:25.735292 containerd[1995]: time="2026-01-15T23:48:25.734158358Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 15 23:48:25.741428 systemd-networkd[1841]: eth0: Gained IPv6LL Jan 15 23:48:25.748942 containerd[1995]: time="2026-01-15T23:48:25.742348658Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 15 23:48:25.778851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 15 23:48:25.813869 systemd[1]: Reached target network-online.target - Network is Online. Jan 15 23:48:25.816276 systemd-coredump[2029]: Process 1971 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1971: #0 0x0000aaaab0b90b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab0b3fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab0b40240 n/a (ntpd + 0x10240) #3 0x0000aaaab0b3be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab0b3d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab0b45a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab0b3738c n/a (ntpd + 0x738c) #7 0x0000ffff96a82034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff96a82118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab0b373f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 15 23:48:25.824714 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 15 23:48:25.837410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:48:25.848931 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 15 23:48:25.861119 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 15 23:48:25.861450 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 15 23:48:25.893446 systemd[1]: systemd-coredump@0-2026-0.service: Deactivated successfully. Jan 15 23:48:25.945596 locksmithd[2031]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 15 23:48:25.964373 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 15 23:48:25.970693 systemd[1]: Started ntpd.service - Network Time Service. Jan 15 23:48:26.043356 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 15 23:48:26.198111 ntpd[2185]: ntpd 4.2.8p18@1.4062-o Thu Jan 15 21:31:42 UTC 2026 (1): Starting Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: ntpd 4.2.8p18@1.4062-o Thu Jan 15 21:31:42 UTC 2026 (1): Starting Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: ---------------------------------------------------- Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: ntp-4 is maintained by Network Time Foundation, Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: corporation. Support and training for ntp-4 are Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: available at https://www.nwtime.org/support Jan 15 23:48:26.202471 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: ---------------------------------------------------- Jan 15 23:48:26.201413 ntpd[2185]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 15 23:48:26.201434 ntpd[2185]: ---------------------------------------------------- Jan 15 23:48:26.201469 ntpd[2185]: ntp-4 is maintained by Network Time Foundation, Jan 15 23:48:26.201498 ntpd[2185]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 15 23:48:26.201533 ntpd[2185]: corporation. 
Support and training for ntp-4 are Jan 15 23:48:26.201567 ntpd[2185]: available at https://www.nwtime.org/support Jan 15 23:48:26.201597 ntpd[2185]: ---------------------------------------------------- Jan 15 23:48:26.212441 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: proto: precision = 0.096 usec (-23) Jan 15 23:48:26.212441 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: basedate set to 2026-01-03 Jan 15 23:48:26.212441 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: gps base set to 2026-01-04 (week 2400) Jan 15 23:48:26.212441 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listen and drop on 0 v6wildcard [::]:123 Jan 15 23:48:26.212441 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 15 23:48:26.208121 ntpd[2185]: proto: precision = 0.096 usec (-23) Jan 15 23:48:26.210888 ntpd[2185]: basedate set to 2026-01-03 Jan 15 23:48:26.210916 ntpd[2185]: gps base set to 2026-01-04 (week 2400) Jan 15 23:48:26.211055 ntpd[2185]: Listen and drop on 0 v6wildcard [::]:123 Jan 15 23:48:26.211101 ntpd[2185]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 15 23:48:26.216475 ntpd[2185]: Listen normally on 2 lo 127.0.0.1:123 Jan 15 23:48:26.219352 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listen normally on 2 lo 127.0.0.1:123 Jan 15 23:48:26.219352 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listen normally on 3 eth0 172.31.28.91:123 Jan 15 23:48:26.219352 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listen normally on 4 lo [::1]:123 Jan 15 23:48:26.219352 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listen normally on 5 eth0 [fe80::4cc:f1ff:fe59:6c9%2]:123 Jan 15 23:48:26.219352 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: Listening on routing socket on fd #22 for interface updates Jan 15 23:48:26.216537 ntpd[2185]: Listen normally on 3 eth0 172.31.28.91:123 Jan 15 23:48:26.216583 ntpd[2185]: Listen normally on 4 lo [::1]:123 Jan 15 23:48:26.216626 ntpd[2185]: Listen normally on 5 eth0 [fe80::4cc:f1ff:fe59:6c9%2]:123 Jan 15 23:48:26.216668 ntpd[2185]: Listening on routing socket on fd #22 
for interface updates Jan 15 23:48:26.265727 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 15 23:48:26.270451 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 15 23:48:26.270451 ntpd[2185]: 15 Jan 23:48:26 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 15 23:48:26.265799 ntpd[2185]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 15 23:48:26.331756 amazon-ssm-agent[2163]: Initializing new seelog logger Jan 15 23:48:26.333730 amazon-ssm-agent[2163]: New Seelog Logger Creation Complete Jan 15 23:48:26.333730 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.333730 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.335701 polkitd[2128]: Started polkitd version 126 Jan 15 23:48:26.341865 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 processing appconfig overrides Jan 15 23:48:26.347342 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.3418 INFO Proxy environment variables: Jan 15 23:48:26.348088 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.348088 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.350975 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 processing appconfig overrides Jan 15 23:48:26.352073 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.352073 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.352773 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 processing appconfig overrides Jan 15 23:48:26.365837 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 15 23:48:26.365837 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:26.365837 amazon-ssm-agent[2163]: 2026/01/15 23:48:26 processing appconfig overrides Jan 15 23:48:26.371151 polkitd[2128]: Loading rules from directory /etc/polkit-1/rules.d Jan 15 23:48:26.374571 polkitd[2128]: Loading rules from directory /run/polkit-1/rules.d Jan 15 23:48:26.374683 polkitd[2128]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 15 23:48:26.378273 polkitd[2128]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 15 23:48:26.378388 polkitd[2128]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 15 23:48:26.381065 polkitd[2128]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 15 23:48:26.389671 polkitd[2128]: Finished loading, compiling and executing 2 rules Jan 15 23:48:26.394050 systemd[1]: Started polkit.service - Authorization Manager. Jan 15 23:48:26.404716 dbus-daemon[1966]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 15 23:48:26.406769 polkitd[2128]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.432448202Z" level=info msg="Start subscribing containerd event" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.432602378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.432623270Z" level=info msg="Start recovering state" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.432703538Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435333830Z" level=info msg="Start event monitor" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435414266Z" level=info msg="Start cni network conf syncer for default" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435436250Z" level=info msg="Start streaming server" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435457610Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435485630Z" level=info msg="runtime interface starting up..." Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435529598Z" level=info msg="starting plugins..." Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435561614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 15 23:48:26.436041 containerd[1995]: time="2026-01-15T23:48:26.435850322Z" level=info msg="containerd successfully booted in 0.869116s" Jan 15 23:48:26.436006 systemd[1]: Started containerd.service - containerd container runtime. Jan 15 23:48:26.450422 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.3420 INFO https_proxy: Jan 15 23:48:26.462293 systemd-resolved[1843]: System hostname changed to 'ip-172-31-28-91'. 
Jan 15 23:48:26.462423 systemd-hostnamed[2027]: Hostname set to (transient) Jan 15 23:48:26.570356 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.3420 INFO http_proxy: Jan 15 23:48:26.671238 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.3469 INFO no_proxy: Jan 15 23:48:26.770294 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.3505 INFO Checking if agent identity type OnPrem can be assumed Jan 15 23:48:26.873267 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.3508 INFO Checking if agent identity type EC2 can be assumed Jan 15 23:48:26.953933 tar[1988]: linux-arm64/README.md Jan 15 23:48:26.973081 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5843 INFO Agent will take identity from EC2 Jan 15 23:48:26.992297 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 15 23:48:27.073240 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5864 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 15 23:48:27.171745 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5864 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 15 23:48:27.270913 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5865 INFO [amazon-ssm-agent] Starting Core Agent Jan 15 23:48:27.271801 sshd_keygen[2005]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 15 23:48:27.320168 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 15 23:48:27.328482 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 15 23:48:27.335888 systemd[1]: Started sshd@0-172.31.28.91:22-20.161.92.111:43134.service - OpenSSH per-connection server daemon (20.161.92.111:43134). Jan 15 23:48:27.372771 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5865 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 15 23:48:27.382201 systemd[1]: issuegen.service: Deactivated successfully. Jan 15 23:48:27.384609 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 15 23:48:27.393897 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 15 23:48:27.439317 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 15 23:48:27.446260 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 15 23:48:27.453832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 15 23:48:27.458386 systemd[1]: Reached target getty.target - Login Prompts. Jan 15 23:48:27.474514 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5865 INFO [Registrar] Starting registrar module Jan 15 23:48:27.574563 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5961 INFO [EC2Identity] Checking disk for registration info Jan 15 23:48:27.674885 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5962 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 15 23:48:27.775300 amazon-ssm-agent[2163]: 2026-01-15 23:48:26.5962 INFO [EC2Identity] Generating registration keypair Jan 15 23:48:27.780257 amazon-ssm-agent[2163]: 2026/01/15 23:48:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:27.780257 amazon-ssm-agent[2163]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 15 23:48:27.780557 amazon-ssm-agent[2163]: 2026/01/15 23:48:27 processing appconfig overrides Jan 15 23:48:27.812088 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.7387 INFO [EC2Identity] Checking write access before registering Jan 15 23:48:27.812365 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.7394 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 15 23:48:27.812365 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.7798 INFO [EC2Identity] EC2 registration was successful. Jan 15 23:48:27.812562 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.7799 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jan 15 23:48:27.812562 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.7800 INFO [CredentialRefresher] credentialRefresher has started Jan 15 23:48:27.812562 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.7800 INFO [CredentialRefresher] Starting credentials refresher loop Jan 15 23:48:27.812562 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.8115 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 15 23:48:27.812973 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.8120 INFO [CredentialRefresher] Credentials ready Jan 15 23:48:27.877201 amazon-ssm-agent[2163]: 2026-01-15 23:48:27.8128 INFO [CredentialRefresher] Next credential rotation will be in 29.9999788477 minutes Jan 15 23:48:27.964922 sshd[2233]: Accepted publickey for core from 20.161.92.111 port 43134 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:27.969348 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:27.984180 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 15 23:48:27.989826 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 15 23:48:28.022379 systemd-logind[1976]: New session 1 of user core. Jan 15 23:48:28.035748 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 15 23:48:28.046744 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 15 23:48:28.072834 (systemd)[2245]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 15 23:48:28.080948 systemd-logind[1976]: New session c1 of user core. Jan 15 23:48:28.378332 systemd[2245]: Queued start job for default target default.target. Jan 15 23:48:28.386915 systemd[2245]: Created slice app.slice - User Application Slice. Jan 15 23:48:28.387163 systemd[2245]: Reached target paths.target - Paths. Jan 15 23:48:28.387287 systemd[2245]: Reached target timers.target - Timers. 
Jan 15 23:48:28.389734 systemd[2245]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 15 23:48:28.419627 systemd[2245]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 15 23:48:28.419867 systemd[2245]: Reached target sockets.target - Sockets. Jan 15 23:48:28.419967 systemd[2245]: Reached target basic.target - Basic System. Jan 15 23:48:28.420051 systemd[2245]: Reached target default.target - Main User Target. Jan 15 23:48:28.420111 systemd[2245]: Startup finished in 326ms. Jan 15 23:48:28.421054 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 15 23:48:28.435524 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 15 23:48:28.800847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:48:28.806055 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 15 23:48:28.810965 systemd[1]: Started sshd@1-172.31.28.91:22-20.161.92.111:43144.service - OpenSSH per-connection server daemon (20.161.92.111:43144). Jan 15 23:48:28.817311 systemd[1]: Startup finished in 3.739s (kernel) + 9.918s (initrd) + 11.273s (userspace) = 24.931s. 
Jan 15 23:48:28.836182 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:48:28.878382 amazon-ssm-agent[2163]: 2026-01-15 23:48:28.8782 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 15 23:48:28.979061 amazon-ssm-agent[2163]: 2026-01-15 23:48:28.8893 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2268) started Jan 15 23:48:29.080724 amazon-ssm-agent[2163]: 2026-01-15 23:48:28.8893 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 15 23:48:29.396948 sshd[2262]: Accepted publickey for core from 20.161.92.111 port 43144 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:29.398589 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:29.408017 systemd-logind[1976]: New session 2 of user core. Jan 15 23:48:29.418503 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 15 23:48:29.749260 sshd[2286]: Connection closed by 20.161.92.111 port 43144 Jan 15 23:48:29.749903 sshd-session[2262]: pam_unix(sshd:session): session closed for user core Jan 15 23:48:29.757525 systemd[1]: sshd@1-172.31.28.91:22-20.161.92.111:43144.service: Deactivated successfully. Jan 15 23:48:29.763429 systemd[1]: session-2.scope: Deactivated successfully. Jan 15 23:48:29.768590 systemd-logind[1976]: Session 2 logged out. Waiting for processes to exit. Jan 15 23:48:29.771798 systemd-logind[1976]: Removed session 2. Jan 15 23:48:29.916667 systemd[1]: Started sshd@2-172.31.28.91:22-20.161.92.111:43160.service - OpenSSH per-connection server daemon (20.161.92.111:43160). 
Jan 15 23:48:30.216414 kubelet[2260]: E0115 23:48:30.216252 2260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:48:30.221892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:48:30.222273 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:48:30.223009 systemd[1]: kubelet.service: Consumed 1.509s CPU time, 255.4M memory peak. Jan 15 23:48:30.430543 sshd[2292]: Accepted publickey for core from 20.161.92.111 port 43160 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:30.432922 sshd-session[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:30.442422 systemd-logind[1976]: New session 3 of user core. Jan 15 23:48:30.463541 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 15 23:48:30.776649 sshd[2296]: Connection closed by 20.161.92.111 port 43160 Jan 15 23:48:30.777494 sshd-session[2292]: pam_unix(sshd:session): session closed for user core Jan 15 23:48:30.786176 systemd[1]: sshd@2-172.31.28.91:22-20.161.92.111:43160.service: Deactivated successfully. Jan 15 23:48:30.790821 systemd[1]: session-3.scope: Deactivated successfully. Jan 15 23:48:30.793564 systemd-logind[1976]: Session 3 logged out. Waiting for processes to exit. Jan 15 23:48:30.796920 systemd-logind[1976]: Removed session 3. Jan 15 23:48:30.870158 systemd[1]: Started sshd@3-172.31.28.91:22-20.161.92.111:43174.service - OpenSSH per-connection server daemon (20.161.92.111:43174). 
Jan 15 23:48:31.398472 sshd[2303]: Accepted publickey for core from 20.161.92.111 port 43174 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:31.399567 sshd-session[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:31.408310 systemd-logind[1976]: New session 4 of user core. Jan 15 23:48:31.417535 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 15 23:48:31.760958 sshd[2306]: Connection closed by 20.161.92.111 port 43174 Jan 15 23:48:31.760837 sshd-session[2303]: pam_unix(sshd:session): session closed for user core Jan 15 23:48:31.766976 systemd-logind[1976]: Session 4 logged out. Waiting for processes to exit. Jan 15 23:48:31.767591 systemd[1]: sshd@3-172.31.28.91:22-20.161.92.111:43174.service: Deactivated successfully. Jan 15 23:48:31.771065 systemd[1]: session-4.scope: Deactivated successfully. Jan 15 23:48:31.776036 systemd-logind[1976]: Removed session 4. Jan 15 23:48:31.855865 systemd[1]: Started sshd@4-172.31.28.91:22-20.161.92.111:43182.service - OpenSSH per-connection server daemon (20.161.92.111:43182). Jan 15 23:48:32.383142 sshd[2312]: Accepted publickey for core from 20.161.92.111 port 43182 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:32.385435 sshd-session[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:32.392808 systemd-logind[1976]: New session 5 of user core. Jan 15 23:48:32.405487 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 15 23:48:32.707286 sudo[2316]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 15 23:48:32.707954 sudo[2316]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:48:32.723180 sudo[2316]: pam_unix(sudo:session): session closed for user root Jan 15 23:48:32.801028 sshd[2315]: Connection closed by 20.161.92.111 port 43182 Jan 15 23:48:32.801610 sshd-session[2312]: pam_unix(sshd:session): session closed for user core Jan 15 23:48:32.810664 systemd-logind[1976]: Session 5 logged out. Waiting for processes to exit. Jan 15 23:48:32.812085 systemd[1]: sshd@4-172.31.28.91:22-20.161.92.111:43182.service: Deactivated successfully. Jan 15 23:48:32.815195 systemd[1]: session-5.scope: Deactivated successfully. Jan 15 23:48:32.819031 systemd-logind[1976]: Removed session 5. Jan 15 23:48:32.893680 systemd[1]: Started sshd@5-172.31.28.91:22-20.161.92.111:57244.service - OpenSSH per-connection server daemon (20.161.92.111:57244). Jan 15 23:48:32.893543 systemd-resolved[1843]: Clock change detected. Flushing caches. Jan 15 23:48:32.900632 systemd-journald[1524]: Time jumped backwards, rotating. Jan 15 23:48:33.115487 sshd[2322]: Accepted publickey for core from 20.161.92.111 port 57244 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:33.117801 sshd-session[2322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:33.125594 systemd-logind[1976]: New session 6 of user core. Jan 15 23:48:33.144729 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 15 23:48:33.397692 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 15 23:48:33.398281 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:48:33.407801 sudo[2328]: pam_unix(sudo:session): session closed for user root Jan 15 23:48:33.417573 sudo[2327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 15 23:48:33.418179 sudo[2327]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:48:33.437870 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 15 23:48:33.497271 augenrules[2350]: No rules Jan 15 23:48:33.499646 systemd[1]: audit-rules.service: Deactivated successfully. Jan 15 23:48:33.500115 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 15 23:48:33.502270 sudo[2327]: pam_unix(sudo:session): session closed for user root Jan 15 23:48:33.582500 sshd[2326]: Connection closed by 20.161.92.111 port 57244 Jan 15 23:48:33.583246 sshd-session[2322]: pam_unix(sshd:session): session closed for user core Jan 15 23:48:33.589871 systemd-logind[1976]: Session 6 logged out. Waiting for processes to exit. Jan 15 23:48:33.591639 systemd[1]: sshd@5-172.31.28.91:22-20.161.92.111:57244.service: Deactivated successfully. Jan 15 23:48:33.594645 systemd[1]: session-6.scope: Deactivated successfully. Jan 15 23:48:33.598757 systemd-logind[1976]: Removed session 6. Jan 15 23:48:33.688302 systemd[1]: Started sshd@6-172.31.28.91:22-20.161.92.111:57246.service - OpenSSH per-connection server daemon (20.161.92.111:57246). 
Jan 15 23:48:34.222350 sshd[2359]: Accepted publickey for core from 20.161.92.111 port 57246 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:48:34.224682 sshd-session[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:48:34.233554 systemd-logind[1976]: New session 7 of user core. Jan 15 23:48:34.247747 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 15 23:48:34.502923 sudo[2363]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 15 23:48:34.503696 sudo[2363]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 15 23:48:35.528609 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 15 23:48:35.543015 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 15 23:48:36.087340 dockerd[2382]: time="2026-01-15T23:48:36.087245723Z" level=info msg="Starting up" Jan 15 23:48:36.089049 dockerd[2382]: time="2026-01-15T23:48:36.088980287Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 15 23:48:36.108991 dockerd[2382]: time="2026-01-15T23:48:36.108918095Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 15 23:48:36.173788 dockerd[2382]: time="2026-01-15T23:48:36.173684340Z" level=info msg="Loading containers: start." Jan 15 23:48:36.190261 kernel: Initializing XFRM netlink socket Jan 15 23:48:36.573148 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. Jan 15 23:48:36.649553 systemd-networkd[1841]: docker0: Link UP Jan 15 23:48:36.663409 dockerd[2382]: time="2026-01-15T23:48:36.663337610Z" level=info msg="Loading containers: done." 
Jan 15 23:48:36.689024 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck404658600-merged.mount: Deactivated successfully. Jan 15 23:48:36.693791 dockerd[2382]: time="2026-01-15T23:48:36.693733826Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 15 23:48:36.694017 dockerd[2382]: time="2026-01-15T23:48:36.693847010Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 15 23:48:36.694017 dockerd[2382]: time="2026-01-15T23:48:36.693993050Z" level=info msg="Initializing buildkit" Jan 15 23:48:36.744925 dockerd[2382]: time="2026-01-15T23:48:36.744857691Z" level=info msg="Completed buildkit initialization" Jan 15 23:48:36.760062 dockerd[2382]: time="2026-01-15T23:48:36.759982827Z" level=info msg="Daemon has completed initialization" Jan 15 23:48:36.760414 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 15 23:48:36.761148 dockerd[2382]: time="2026-01-15T23:48:36.760264407Z" level=info msg="API listen on /run/docker.sock" Jan 15 23:48:38.798690 containerd[1995]: time="2026-01-15T23:48:38.798638813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 15 23:48:39.453785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121324176.mount: Deactivated successfully. Jan 15 23:48:39.931717 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 15 23:48:39.934172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:48:40.355881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 15 23:48:40.371359 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:48:40.483506 kubelet[2660]: E0115 23:48:40.483392 2660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:48:40.492605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:48:40.492915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:48:40.494646 systemd[1]: kubelet.service: Consumed 343ms CPU time, 105.4M memory peak. Jan 15 23:48:41.026488 containerd[1995]: time="2026-01-15T23:48:41.024862648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:41.030199 containerd[1995]: time="2026-01-15T23:48:41.030070636Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 15 23:48:41.034223 containerd[1995]: time="2026-01-15T23:48:41.034176052Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:41.041484 containerd[1995]: time="2026-01-15T23:48:41.041401060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:41.043611 containerd[1995]: time="2026-01-15T23:48:41.043551616Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id 
\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.244850103s" Jan 15 23:48:41.043737 containerd[1995]: time="2026-01-15T23:48:41.043613704Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 15 23:48:41.044517 containerd[1995]: time="2026-01-15T23:48:41.044429032Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 15 23:48:42.661503 containerd[1995]: time="2026-01-15T23:48:42.660835520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:42.663688 containerd[1995]: time="2026-01-15T23:48:42.663643376Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 15 23:48:42.666430 containerd[1995]: time="2026-01-15T23:48:42.666386936Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:42.673708 containerd[1995]: time="2026-01-15T23:48:42.673586516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:42.676265 containerd[1995]: time="2026-01-15T23:48:42.675698456Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.631161244s" Jan 15 23:48:42.676265 containerd[1995]: time="2026-01-15T23:48:42.675759704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 15 23:48:42.676944 containerd[1995]: time="2026-01-15T23:48:42.676906208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 15 23:48:44.178397 containerd[1995]: time="2026-01-15T23:48:44.178305356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:44.182920 containerd[1995]: time="2026-01-15T23:48:44.182858612Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 15 23:48:44.186339 containerd[1995]: time="2026-01-15T23:48:44.186274988Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:44.198500 containerd[1995]: time="2026-01-15T23:48:44.197651792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:44.200514 containerd[1995]: time="2026-01-15T23:48:44.200432036Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.523385776s" Jan 15 23:48:44.200667 
containerd[1995]: time="2026-01-15T23:48:44.200638736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 15 23:48:44.201741 containerd[1995]: time="2026-01-15T23:48:44.201691076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 15 23:48:45.465699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295334984.mount: Deactivated successfully. Jan 15 23:48:46.037766 containerd[1995]: time="2026-01-15T23:48:46.037713945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:46.040626 containerd[1995]: time="2026-01-15T23:48:46.040585773Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 15 23:48:46.042791 containerd[1995]: time="2026-01-15T23:48:46.042721401Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:46.049492 containerd[1995]: time="2026-01-15T23:48:46.047953533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:46.049492 containerd[1995]: time="2026-01-15T23:48:46.049325901Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.847336733s" Jan 15 23:48:46.049492 containerd[1995]: time="2026-01-15T23:48:46.049378737Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 15 23:48:46.051124 containerd[1995]: time="2026-01-15T23:48:46.051039129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 15 23:48:46.606681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3303609271.mount: Deactivated successfully. Jan 15 23:48:47.899004 containerd[1995]: time="2026-01-15T23:48:47.898923110Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:47.902192 containerd[1995]: time="2026-01-15T23:48:47.901688318Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 15 23:48:47.904581 containerd[1995]: time="2026-01-15T23:48:47.904521446Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:47.910357 containerd[1995]: time="2026-01-15T23:48:47.910289546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:47.912666 containerd[1995]: time="2026-01-15T23:48:47.912604574Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.861480329s" Jan 15 23:48:47.912752 containerd[1995]: time="2026-01-15T23:48:47.912663110Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image 
reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 15 23:48:47.914564 containerd[1995]: time="2026-01-15T23:48:47.914524166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 15 23:48:48.435524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3382700605.mount: Deactivated successfully. Jan 15 23:48:48.451530 containerd[1995]: time="2026-01-15T23:48:48.451158709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 23:48:48.454982 containerd[1995]: time="2026-01-15T23:48:48.454916365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 15 23:48:48.457275 containerd[1995]: time="2026-01-15T23:48:48.457192897Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 23:48:48.463274 containerd[1995]: time="2026-01-15T23:48:48.463184533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 15 23:48:48.464866 containerd[1995]: time="2026-01-15T23:48:48.464487541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 549.785499ms" Jan 15 23:48:48.464866 containerd[1995]: time="2026-01-15T23:48:48.464544301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" 
returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 15 23:48:48.465609 containerd[1995]: time="2026-01-15T23:48:48.465309049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 15 23:48:49.022307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252386851.mount: Deactivated successfully. Jan 15 23:48:50.682278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 15 23:48:50.686781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:48:51.130722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:48:51.143496 (kubelet)[2799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 15 23:48:51.229767 kubelet[2799]: E0115 23:48:51.229708 2799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 15 23:48:51.237577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 15 23:48:51.238176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 15 23:48:51.239162 systemd[1]: kubelet.service: Consumed 330ms CPU time, 104.8M memory peak. 
Jan 15 23:48:51.551091 containerd[1995]: time="2026-01-15T23:48:51.550996888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:51.553523 containerd[1995]: time="2026-01-15T23:48:51.552967600Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 15 23:48:51.555650 containerd[1995]: time="2026-01-15T23:48:51.555574540Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:51.561187 containerd[1995]: time="2026-01-15T23:48:51.561110776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:48:51.563773 containerd[1995]: time="2026-01-15T23:48:51.563202700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.097843287s" Jan 15 23:48:51.563773 containerd[1995]: time="2026-01-15T23:48:51.563263396Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 15 23:48:56.190056 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 15 23:49:00.093126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:49:00.094063 systemd[1]: kubelet.service: Consumed 330ms CPU time, 104.8M memory peak. Jan 15 23:49:00.099009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 15 23:49:00.155896 systemd[1]: Reload requested from client PID 2838 ('systemctl') (unit session-7.scope)... Jan 15 23:49:00.155930 systemd[1]: Reloading... Jan 15 23:49:00.408545 zram_generator::config[2885]: No configuration found. Jan 15 23:49:00.868872 systemd[1]: Reloading finished in 712 ms. Jan 15 23:49:00.951479 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 15 23:49:00.951680 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 15 23:49:00.953564 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:49:00.953647 systemd[1]: kubelet.service: Consumed 224ms CPU time, 94.9M memory peak. Jan 15 23:49:00.958147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 15 23:49:01.291970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 15 23:49:01.309039 (kubelet)[2945]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 15 23:49:01.382163 kubelet[2945]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 15 23:49:01.382163 kubelet[2945]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 15 23:49:01.382163 kubelet[2945]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 15 23:49:01.382701 kubelet[2945]: I0115 23:49:01.382246 2945 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 15 23:49:02.077959 kubelet[2945]: I0115 23:49:02.077889 2945 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 15 23:49:02.077959 kubelet[2945]: I0115 23:49:02.077938 2945 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 15 23:49:02.078454 kubelet[2945]: I0115 23:49:02.078405 2945 server.go:954] "Client rotation is on, will bootstrap in background" Jan 15 23:49:02.129124 kubelet[2945]: I0115 23:49:02.129061 2945 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 15 23:49:02.131051 kubelet[2945]: E0115 23:49:02.130981 2945 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.28.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:49:02.143809 kubelet[2945]: I0115 23:49:02.143767 2945 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 15 23:49:02.150498 kubelet[2945]: I0115 23:49:02.149934 2945 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 15 23:49:02.151781 kubelet[2945]: I0115 23:49:02.151727 2945 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 15 23:49:02.152176 kubelet[2945]: I0115 23:49:02.151901 2945 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-91","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 15 23:49:02.152538 kubelet[2945]: I0115 23:49:02.152518 2945 topology_manager.go:138] "Creating topology manager with none 
policy" Jan 15 23:49:02.152626 kubelet[2945]: I0115 23:49:02.152610 2945 container_manager_linux.go:304] "Creating device plugin manager" Jan 15 23:49:02.153028 kubelet[2945]: I0115 23:49:02.153009 2945 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:49:02.158601 kubelet[2945]: I0115 23:49:02.158567 2945 kubelet.go:446] "Attempting to sync node with API server" Jan 15 23:49:02.158746 kubelet[2945]: I0115 23:49:02.158727 2945 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 15 23:49:02.158857 kubelet[2945]: I0115 23:49:02.158840 2945 kubelet.go:352] "Adding apiserver pod source" Jan 15 23:49:02.158966 kubelet[2945]: I0115 23:49:02.158931 2945 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 15 23:49:02.165537 kubelet[2945]: W0115 23:49:02.164636 2945 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.28.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-91&limit=500&resourceVersion=0": dial tcp 172.31.28.91:6443: connect: connection refused Jan 15 23:49:02.165537 kubelet[2945]: E0115 23:49:02.164752 2945 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.28.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-91&limit=500&resourceVersion=0\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:49:02.165537 kubelet[2945]: W0115 23:49:02.164956 2945 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.28.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.28.91:6443: connect: connection refused Jan 15 23:49:02.165537 kubelet[2945]: E0115 23:49:02.165028 2945 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://172.31.28.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:49:02.165537 kubelet[2945]: I0115 23:49:02.165305 2945 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 15 23:49:02.167015 kubelet[2945]: I0115 23:49:02.166979 2945 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 15 23:49:02.167347 kubelet[2945]: W0115 23:49:02.167327 2945 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 15 23:49:02.179977 kubelet[2945]: I0115 23:49:02.179924 2945 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 15 23:49:02.180125 kubelet[2945]: I0115 23:49:02.179991 2945 server.go:1287] "Started kubelet" Jan 15 23:49:02.184196 kubelet[2945]: I0115 23:49:02.184134 2945 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 15 23:49:02.185823 kubelet[2945]: I0115 23:49:02.185793 2945 server.go:479] "Adding debug handlers to kubelet server" Jan 15 23:49:02.189017 kubelet[2945]: I0115 23:49:02.188914 2945 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 15 23:49:02.189421 kubelet[2945]: I0115 23:49:02.189378 2945 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 15 23:49:02.191087 kubelet[2945]: I0115 23:49:02.191049 2945 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 15 23:49:02.191496 kubelet[2945]: E0115 23:49:02.191019 2945 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.91:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.91:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ip-172-31-28-91.188b0c6d4879c1c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-91,UID:ip-172-31-28-91,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-91,},FirstTimestamp:2026-01-15 23:49:02.179959237 +0000 UTC m=+0.863920541,LastTimestamp:2026-01-15 23:49:02.179959237 +0000 UTC m=+0.863920541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-91,}" Jan 15 23:49:02.192515 kubelet[2945]: I0115 23:49:02.191710 2945 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 15 23:49:02.202308 kubelet[2945]: E0115 23:49:02.202230 2945 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-91\" not found" Jan 15 23:49:02.202417 kubelet[2945]: I0115 23:49:02.202331 2945 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 15 23:49:02.202741 kubelet[2945]: I0115 23:49:02.202697 2945 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 15 23:49:02.202831 kubelet[2945]: I0115 23:49:02.202805 2945 reconciler.go:26] "Reconciler: start to sync state" Jan 15 23:49:02.204117 kubelet[2945]: W0115 23:49:02.204039 2945 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.91:6443: connect: connection refused Jan 15 23:49:02.204232 kubelet[2945]: E0115 23:49:02.204130 2945 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.28.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:49:02.204586 kubelet[2945]: I0115 23:49:02.204502 2945 factory.go:221] Registration of the systemd container factory successfully Jan 15 23:49:02.204708 kubelet[2945]: I0115 23:49:02.204671 2945 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 15 23:49:02.205523 kubelet[2945]: E0115 23:49:02.205444 2945 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 15 23:49:02.206063 kubelet[2945]: E0115 23:49:02.206001 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-91?timeout=10s\": dial tcp 172.31.28.91:6443: connect: connection refused" interval="200ms" Jan 15 23:49:02.208797 kubelet[2945]: I0115 23:49:02.208762 2945 factory.go:221] Registration of the containerd container factory successfully Jan 15 23:49:02.235619 kubelet[2945]: I0115 23:49:02.235551 2945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 15 23:49:02.239118 kubelet[2945]: I0115 23:49:02.238998 2945 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 15 23:49:02.239118 kubelet[2945]: I0115 23:49:02.239045 2945 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 15 23:49:02.239118 kubelet[2945]: I0115 23:49:02.239089 2945 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 15 23:49:02.239118 kubelet[2945]: I0115 23:49:02.239106 2945 kubelet.go:2382] "Starting kubelet main sync loop" Jan 15 23:49:02.239364 kubelet[2945]: E0115 23:49:02.239170 2945 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 15 23:49:02.245397 kubelet[2945]: W0115 23:49:02.245064 2945 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.91:6443: connect: connection refused Jan 15 23:49:02.245875 kubelet[2945]: E0115 23:49:02.245802 2945 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError" Jan 15 23:49:02.249042 kubelet[2945]: I0115 23:49:02.248925 2945 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 15 23:49:02.249042 kubelet[2945]: I0115 23:49:02.248989 2945 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 15 23:49:02.249423 kubelet[2945]: I0115 23:49:02.249017 2945 state_mem.go:36] "Initialized new in-memory state store" Jan 15 23:49:02.256074 kubelet[2945]: I0115 23:49:02.256039 2945 policy_none.go:49] "None policy: Start" Jan 15 23:49:02.256304 kubelet[2945]: I0115 23:49:02.256237 2945 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 15 23:49:02.256304 kubelet[2945]: I0115 23:49:02.256265 2945 state_mem.go:35] "Initializing new in-memory state store" Jan 15 23:49:02.269717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 15 23:49:02.301139 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 15 23:49:02.303489 kubelet[2945]: E0115 23:49:02.302664 2945 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-91\" not found" Jan 15 23:49:02.316645 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 15 23:49:02.319912 kubelet[2945]: I0115 23:49:02.319879 2945 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 15 23:49:02.322714 kubelet[2945]: I0115 23:49:02.322683 2945 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 15 23:49:02.322965 kubelet[2945]: I0115 23:49:02.322878 2945 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 15 23:49:02.323609 kubelet[2945]: I0115 23:49:02.323586 2945 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 15 23:49:02.324971 kubelet[2945]: E0115 23:49:02.324919 2945 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 15 23:49:02.325157 kubelet[2945]: E0115 23:49:02.325137 2945 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-91\" not found" Jan 15 23:49:02.362311 systemd[1]: Created slice kubepods-burstable-podfc5687e9763800aa4bdb61d16feb9a93.slice - libcontainer container kubepods-burstable-podfc5687e9763800aa4bdb61d16feb9a93.slice. 
Jan 15 23:49:02.372999 kubelet[2945]: E0115 23:49:02.372951 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:02.383658 systemd[1]: Created slice kubepods-burstable-podf96ffbbed867f9d600d5bd20dafdfd33.slice - libcontainer container kubepods-burstable-podf96ffbbed867f9d600d5bd20dafdfd33.slice.
Jan 15 23:49:02.389256 kubelet[2945]: E0115 23:49:02.389211 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:02.395196 systemd[1]: Created slice kubepods-burstable-pode6fece6bb6c169bef741067adf0a4378.slice - libcontainer container kubepods-burstable-pode6fece6bb6c169bef741067adf0a4378.slice.
Jan 15 23:49:02.402353 kubelet[2945]: E0115 23:49:02.402233 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:02.404101 kubelet[2945]: I0115 23:49:02.404029 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:02.404101 kubelet[2945]: I0115 23:49:02.404096 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:02.404280 kubelet[2945]: I0115 23:49:02.404137 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:02.404280 kubelet[2945]: I0115 23:49:02.404172 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:02.407002 kubelet[2945]: E0115 23:49:02.406920 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-91?timeout=10s\": dial tcp 172.31.28.91:6443: connect: connection refused" interval="400ms"
Jan 15 23:49:02.425977 kubelet[2945]: I0115 23:49:02.425933 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-91"
Jan 15 23:49:02.427207 kubelet[2945]: E0115 23:49:02.427159 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.91:6443/api/v1/nodes\": dial tcp 172.31.28.91:6443: connect: connection refused" node="ip-172-31-28-91"
Jan 15 23:49:02.505046 kubelet[2945]: I0115 23:49:02.504411 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f96ffbbed867f9d600d5bd20dafdfd33-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-91\" (UID: \"f96ffbbed867f9d600d5bd20dafdfd33\") " pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:02.505046 kubelet[2945]: I0115 23:49:02.504679 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6fece6bb6c169bef741067adf0a4378-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-91\" (UID: \"e6fece6bb6c169bef741067adf0a4378\") " pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:02.505046 kubelet[2945]: I0115 23:49:02.504726 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:02.505046 kubelet[2945]: I0115 23:49:02.504768 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6fece6bb6c169bef741067adf0a4378-ca-certs\") pod \"kube-apiserver-ip-172-31-28-91\" (UID: \"e6fece6bb6c169bef741067adf0a4378\") " pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:02.505046 kubelet[2945]: I0115 23:49:02.504804 2945 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6fece6bb6c169bef741067adf0a4378-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-91\" (UID: \"e6fece6bb6c169bef741067adf0a4378\") " pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:02.630107 kubelet[2945]: I0115 23:49:02.629998 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-91"
Jan 15 23:49:02.631829 kubelet[2945]: E0115 23:49:02.631709 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.91:6443/api/v1/nodes\": dial tcp 172.31.28.91:6443: connect: connection refused" node="ip-172-31-28-91"
Jan 15 23:49:02.675255 containerd[1995]: time="2026-01-15T23:49:02.675177759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-91,Uid:fc5687e9763800aa4bdb61d16feb9a93,Namespace:kube-system,Attempt:0,}"
Jan 15 23:49:02.693536 containerd[1995]: time="2026-01-15T23:49:02.693137236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-91,Uid:f96ffbbed867f9d600d5bd20dafdfd33,Namespace:kube-system,Attempt:0,}"
Jan 15 23:49:02.715665 containerd[1995]: time="2026-01-15T23:49:02.715584748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-91,Uid:e6fece6bb6c169bef741067adf0a4378,Namespace:kube-system,Attempt:0,}"
Jan 15 23:49:02.722751 containerd[1995]: time="2026-01-15T23:49:02.722673688Z" level=info msg="connecting to shim 5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434" address="unix:///run/containerd/s/ceac1c5dbf6a58743a791ec89a97bf4c3eb6c9c941b96233e06dc41b1dc079aa" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:49:02.768957 containerd[1995]: time="2026-01-15T23:49:02.768905128Z" level=info msg="connecting to shim f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e" address="unix:///run/containerd/s/4112be832d09547c72c5dfb038e5a590b4d75674dcc835c7534d7438d7a0f3ba" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:49:02.784796 systemd[1]: Started cri-containerd-5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434.scope - libcontainer container 5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434.
Jan 15 23:49:02.809294 kubelet[2945]: E0115 23:49:02.809197 2945 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-91?timeout=10s\": dial tcp 172.31.28.91:6443: connect: connection refused" interval="800ms"
Jan 15 23:49:02.816248 containerd[1995]: time="2026-01-15T23:49:02.816165184Z" level=info msg="connecting to shim 1485ee75143f6158bf7b19be20c2feba74e76dad88e146eb171bfa57103d9d6f" address="unix:///run/containerd/s/be5d16c03eaac86b1afbc25a821ebeb83ac22a123952d072cd4a413b3aef9bf9" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:49:02.854523 systemd[1]: Started cri-containerd-f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e.scope - libcontainer container f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e.
Jan 15 23:49:02.887822 systemd[1]: Started cri-containerd-1485ee75143f6158bf7b19be20c2feba74e76dad88e146eb171bfa57103d9d6f.scope - libcontainer container 1485ee75143f6158bf7b19be20c2feba74e76dad88e146eb171bfa57103d9d6f.
Jan 15 23:49:02.968489 containerd[1995]: time="2026-01-15T23:49:02.968021141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-91,Uid:fc5687e9763800aa4bdb61d16feb9a93,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434\""
Jan 15 23:49:02.979985 containerd[1995]: time="2026-01-15T23:49:02.979920161Z" level=info msg="CreateContainer within sandbox \"5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 15 23:49:03.007104 containerd[1995]: time="2026-01-15T23:49:03.006933529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-91,Uid:f96ffbbed867f9d600d5bd20dafdfd33,Namespace:kube-system,Attempt:0,} returns sandbox id \"f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e\""
Jan 15 23:49:03.008515 containerd[1995]: time="2026-01-15T23:49:03.007437025Z" level=info msg="Container 0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:49:03.014301 containerd[1995]: time="2026-01-15T23:49:03.014252953Z" level=info msg="CreateContainer within sandbox \"f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 15 23:49:03.025489 containerd[1995]: time="2026-01-15T23:49:03.025417921Z" level=info msg="CreateContainer within sandbox \"5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d\""
Jan 15 23:49:03.027909 containerd[1995]: time="2026-01-15T23:49:03.027859153Z" level=info msg="StartContainer for \"0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d\""
Jan 15 23:49:03.030624 containerd[1995]: time="2026-01-15T23:49:03.030573985Z" level=info msg="connecting to shim 0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d" address="unix:///run/containerd/s/ceac1c5dbf6a58743a791ec89a97bf4c3eb6c9c941b96233e06dc41b1dc079aa" protocol=ttrpc version=3
Jan 15 23:49:03.035159 containerd[1995]: time="2026-01-15T23:49:03.035081557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-91,Uid:e6fece6bb6c169bef741067adf0a4378,Namespace:kube-system,Attempt:0,} returns sandbox id \"1485ee75143f6158bf7b19be20c2feba74e76dad88e146eb171bfa57103d9d6f\""
Jan 15 23:49:03.036788 kubelet[2945]: I0115 23:49:03.036353 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-91"
Jan 15 23:49:03.037006 kubelet[2945]: E0115 23:49:03.036942 2945 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.91:6443/api/v1/nodes\": dial tcp 172.31.28.91:6443: connect: connection refused" node="ip-172-31-28-91"
Jan 15 23:49:03.041502 containerd[1995]: time="2026-01-15T23:49:03.041418853Z" level=info msg="CreateContainer within sandbox \"1485ee75143f6158bf7b19be20c2feba74e76dad88e146eb171bfa57103d9d6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 15 23:49:03.047425 containerd[1995]: time="2026-01-15T23:49:03.047356057Z" level=info msg="Container 22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:49:03.070498 containerd[1995]: time="2026-01-15T23:49:03.068253313Z" level=info msg="Container a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:49:03.074894 containerd[1995]: time="2026-01-15T23:49:03.074408689Z" level=info msg="CreateContainer within sandbox \"f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77\""
Jan 15 23:49:03.076787 systemd[1]: Started cri-containerd-0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d.scope - libcontainer container 0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d.
Jan 15 23:49:03.080160 containerd[1995]: time="2026-01-15T23:49:03.080083213Z" level=info msg="StartContainer for \"22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77\""
Jan 15 23:49:03.098610 containerd[1995]: time="2026-01-15T23:49:03.098220026Z" level=info msg="connecting to shim 22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77" address="unix:///run/containerd/s/4112be832d09547c72c5dfb038e5a590b4d75674dcc835c7534d7438d7a0f3ba" protocol=ttrpc version=3
Jan 15 23:49:03.110500 containerd[1995]: time="2026-01-15T23:49:03.110305910Z" level=info msg="CreateContainer within sandbox \"1485ee75143f6158bf7b19be20c2feba74e76dad88e146eb171bfa57103d9d6f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20\""
Jan 15 23:49:03.111332 containerd[1995]: time="2026-01-15T23:49:03.111276974Z" level=info msg="StartContainer for \"a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20\""
Jan 15 23:49:03.114046 containerd[1995]: time="2026-01-15T23:49:03.113882966Z" level=info msg="connecting to shim a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20" address="unix:///run/containerd/s/be5d16c03eaac86b1afbc25a821ebeb83ac22a123952d072cd4a413b3aef9bf9" protocol=ttrpc version=3
Jan 15 23:49:03.167838 systemd[1]: Started cri-containerd-22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77.scope - libcontainer container 22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77.
Jan 15 23:49:03.176909 systemd[1]: Started cri-containerd-a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20.scope - libcontainer container a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20.
Jan 15 23:49:03.214792 containerd[1995]: time="2026-01-15T23:49:03.214710458Z" level=info msg="StartContainer for \"0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d\" returns successfully"
Jan 15 23:49:03.271985 kubelet[2945]: E0115 23:49:03.271852 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:03.351534 containerd[1995]: time="2026-01-15T23:49:03.351076803Z" level=info msg="StartContainer for \"22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77\" returns successfully"
Jan 15 23:49:03.361792 containerd[1995]: time="2026-01-15T23:49:03.361737531Z" level=info msg="StartContainer for \"a08798b393fd3110f3a40e112d5ce6d0202f58add9991dc595de1dd8b3291d20\" returns successfully"
Jan 15 23:49:03.480498 kubelet[2945]: W0115 23:49:03.480116 2945 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.28.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.91:6443: connect: connection refused
Jan 15 23:49:03.481284 kubelet[2945]: E0115 23:49:03.480440 2945 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.28.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError"
Jan 15 23:49:03.533843 kubelet[2945]: W0115 23:49:03.533756 2945 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.28.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.91:6443: connect: connection refused
Jan 15 23:49:03.533987 kubelet[2945]: E0115 23:49:03.533856 2945 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.28.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.91:6443: connect: connection refused" logger="UnhandledError"
Jan 15 23:49:03.842112 kubelet[2945]: I0115 23:49:03.841991 2945 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-91"
Jan 15 23:49:04.282626 kubelet[2945]: E0115 23:49:04.282590 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:04.292370 kubelet[2945]: E0115 23:49:04.292337 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:05.295045 kubelet[2945]: E0115 23:49:05.294993 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:05.295664 kubelet[2945]: E0115 23:49:05.295006 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:06.298377 kubelet[2945]: E0115 23:49:06.298276 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:06.299817 kubelet[2945]: E0115 23:49:06.298279 2945 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:08.324764 kubelet[2945]: E0115 23:49:08.324682 2945 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-91\" not found" node="ip-172-31-28-91"
Jan 15 23:49:08.511075 kubelet[2945]: I0115 23:49:08.511001 2945 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-91"
Jan 15 23:49:08.606598 kubelet[2945]: I0115 23:49:08.606128 2945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:08.630000 kubelet[2945]: E0115 23:49:08.629917 2945 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-91\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:08.630297 kubelet[2945]: I0115 23:49:08.630084 2945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:08.637310 kubelet[2945]: E0115 23:49:08.636893 2945 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-91\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:08.637310 kubelet[2945]: I0115 23:49:08.636944 2945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:08.651266 kubelet[2945]: E0115 23:49:08.651185 2945 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-91\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:09.169293 kubelet[2945]: I0115 23:49:09.169248 2945 apiserver.go:52] "Watching apiserver"
Jan 15 23:49:09.203744 kubelet[2945]: I0115 23:49:09.203577 2945 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 15 23:49:09.531320 kubelet[2945]: I0115 23:49:09.531160 2945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:09.578225 update_engine[1980]: I20260115 23:49:09.578015 1980 update_attempter.cc:509] Updating boot flags...
Jan 15 23:49:11.317827 systemd[1]: Reload requested from client PID 3487 ('systemctl') (unit session-7.scope)...
Jan 15 23:49:11.317877 systemd[1]: Reloading...
Jan 15 23:49:11.611563 zram_generator::config[3534]: No configuration found.
Jan 15 23:49:11.825566 kubelet[2945]: I0115 23:49:11.824383 2945 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:12.115014 systemd[1]: Reloading finished in 795 ms.
Jan 15 23:49:12.179440 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:49:12.197125 systemd[1]: kubelet.service: Deactivated successfully.
Jan 15 23:49:12.197623 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:49:12.197718 systemd[1]: kubelet.service: Consumed 1.712s CPU time, 128.6M memory peak.
Jan 15 23:49:12.202105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 15 23:49:12.581572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 15 23:49:12.602137 (kubelet)[3591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 15 23:49:12.696832 kubelet[3591]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 23:49:12.696832 kubelet[3591]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 15 23:49:12.696832 kubelet[3591]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 15 23:49:12.697393 kubelet[3591]: I0115 23:49:12.696895 3591 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 15 23:49:12.712231 kubelet[3591]: I0115 23:49:12.712170 3591 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 15 23:49:12.712231 kubelet[3591]: I0115 23:49:12.712220 3591 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 15 23:49:12.712818 kubelet[3591]: I0115 23:49:12.712720 3591 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 15 23:49:12.720712 kubelet[3591]: I0115 23:49:12.720563 3591 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 15 23:49:12.726517 kubelet[3591]: I0115 23:49:12.726330 3591 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 15 23:49:12.736028 kubelet[3591]: I0115 23:49:12.735982 3591 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 15 23:49:12.744064 kubelet[3591]: I0115 23:49:12.743565 3591 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 15 23:49:12.745279 kubelet[3591]: I0115 23:49:12.744717 3591 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 15 23:49:12.745812 kubelet[3591]: I0115 23:49:12.745441 3591 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-91","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 15 23:49:12.746057 kubelet[3591]: I0115 23:49:12.746036 3591 topology_manager.go:138] "Creating topology manager with none policy"
Jan 15 23:49:12.746155 kubelet[3591]: I0115 23:49:12.746138 3591 container_manager_linux.go:304] "Creating device plugin manager"
Jan 15 23:49:12.746325 kubelet[3591]: I0115 23:49:12.746306 3591 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 23:49:12.746811 kubelet[3591]: I0115 23:49:12.746736 3591 kubelet.go:446] "Attempting to sync node with API server"
Jan 15 23:49:12.746811 kubelet[3591]: I0115 23:49:12.746858 3591 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 15 23:49:12.746811 kubelet[3591]: I0115 23:49:12.746921 3591 kubelet.go:352] "Adding apiserver pod source"
Jan 15 23:49:12.746811 kubelet[3591]: I0115 23:49:12.746953 3591 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 15 23:49:12.749886 kubelet[3591]: I0115 23:49:12.749834 3591 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 15 23:49:12.751153 kubelet[3591]: I0115 23:49:12.750656 3591 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 15 23:49:12.751834 kubelet[3591]: I0115 23:49:12.751782 3591 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 15 23:49:12.751974 kubelet[3591]: I0115 23:49:12.751847 3591 server.go:1287] "Started kubelet"
Jan 15 23:49:12.762048 kubelet[3591]: I0115 23:49:12.761672 3591 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 15 23:49:12.775662 kubelet[3591]: I0115 23:49:12.775579 3591 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 15 23:49:12.782505 kubelet[3591]: I0115 23:49:12.781519 3591 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 15 23:49:12.786497 kubelet[3591]: I0115 23:49:12.784922 3591 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 15 23:49:12.799509 kubelet[3591]: I0115 23:49:12.798630 3591 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 15 23:49:12.801881 kubelet[3591]: I0115 23:49:12.800270 3591 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 15 23:49:12.803394 kubelet[3591]: E0115 23:49:12.802383 3591 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-91\" not found"
Jan 15 23:49:12.817113 kubelet[3591]: I0115 23:49:12.817080 3591 server.go:479] "Adding debug handlers to kubelet server"
Jan 15 23:49:12.836717 kubelet[3591]: I0115 23:49:12.836594 3591 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 15 23:49:12.837767 kubelet[3591]: I0115 23:49:12.837741 3591 reconciler.go:26] "Reconciler: start to sync state"
Jan 15 23:49:12.843585 kubelet[3591]: I0115 23:49:12.843512 3591 factory.go:221] Registration of the systemd container factory successfully
Jan 15 23:49:12.845127 kubelet[3591]: I0115 23:49:12.844959 3591 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 15 23:49:12.850635 kubelet[3591]: I0115 23:49:12.850581 3591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 15 23:49:12.854218 kubelet[3591]: I0115 23:49:12.854176 3591 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 15 23:49:12.854558 kubelet[3591]: I0115 23:49:12.854368 3591 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 15 23:49:12.854558 kubelet[3591]: I0115 23:49:12.854404 3591 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 15 23:49:12.854935 kubelet[3591]: I0115 23:49:12.854619 3591 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 15 23:49:12.854935 kubelet[3591]: E0115 23:49:12.854823 3591 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 15 23:49:12.877671 kubelet[3591]: E0115 23:49:12.877547 3591 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 15 23:49:12.880160 kubelet[3591]: I0115 23:49:12.878645 3591 factory.go:221] Registration of the containerd container factory successfully
Jan 15 23:49:12.957147 kubelet[3591]: E0115 23:49:12.957084 3591 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 15 23:49:12.992661 kubelet[3591]: I0115 23:49:12.991242 3591 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 15 23:49:12.993642 kubelet[3591]: I0115 23:49:12.993603 3591 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 15 23:49:12.993933 kubelet[3591]: I0115 23:49:12.993912 3591 state_mem.go:36] "Initialized new in-memory state store"
Jan 15 23:49:12.995021 kubelet[3591]: I0115 23:49:12.994985 3591 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 15 23:49:12.995493 kubelet[3591]: I0115 23:49:12.995256 3591 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 15 23:49:12.995621 kubelet[3591]: I0115 23:49:12.995603 3591 policy_none.go:49] "None policy: Start"
Jan 15 23:49:12.995725 kubelet[3591]: I0115 23:49:12.995707 3591 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 15 23:49:12.995929 kubelet[3591]: I0115 23:49:12.995910 3591 state_mem.go:35] "Initializing new in-memory state store"
Jan 15 23:49:12.997943 kubelet[3591]: I0115 23:49:12.996383 3591 state_mem.go:75] "Updated machine memory state"
Jan 15 23:49:13.020845 kubelet[3591]: I0115 23:49:13.020811 3591 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 15 23:49:13.022140 kubelet[3591]: I0115 23:49:13.022111 3591 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 15 23:49:13.024106 kubelet[3591]: I0115 23:49:13.023923 3591 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 15 23:49:13.025194 kubelet[3591]: I0115 23:49:13.025167 3591 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 15 23:49:13.037957 kubelet[3591]: E0115 23:49:13.037431 3591 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 15 23:49:13.152087 kubelet[3591]: I0115 23:49:13.151957 3591 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-91"
Jan 15 23:49:13.159516 kubelet[3591]: I0115 23:49:13.159367 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:13.160940 kubelet[3591]: I0115 23:49:13.160418 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.162021 kubelet[3591]: I0115 23:49:13.161530 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:13.182657 kubelet[3591]: E0115 23:49:13.182618 3591 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-91\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:13.182965 kubelet[3591]: E0115 23:49:13.182939 3591 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-91\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.183236 kubelet[3591]: I0115 23:49:13.183213 3591 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-91"
Jan 15 23:49:13.183408 kubelet[3591]: I0115 23:49:13.183390 3591 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-91"
Jan 15 23:49:13.241273 kubelet[3591]: I0115 23:49:13.240739 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.241273 kubelet[3591]: I0115 23:49:13.240811 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.241273 kubelet[3591]: I0115 23:49:13.240853 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.241273 kubelet[3591]: I0115 23:49:13.240899 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f96ffbbed867f9d600d5bd20dafdfd33-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-91\" (UID: \"f96ffbbed867f9d600d5bd20dafdfd33\") " pod="kube-system/kube-scheduler-ip-172-31-28-91"
Jan 15 23:49:13.241273 kubelet[3591]: I0115 23:49:13.240975 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6fece6bb6c169bef741067adf0a4378-ca-certs\") pod \"kube-apiserver-ip-172-31-28-91\" (UID: \"e6fece6bb6c169bef741067adf0a4378\") " pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:13.241667 kubelet[3591]: I0115 23:49:13.241014 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6fece6bb6c169bef741067adf0a4378-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-91\" (UID: \"e6fece6bb6c169bef741067adf0a4378\") " pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:13.241667 kubelet[3591]: I0115 23:49:13.241048 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6fece6bb6c169bef741067adf0a4378-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-91\" (UID: \"e6fece6bb6c169bef741067adf0a4378\") " pod="kube-system/kube-apiserver-ip-172-31-28-91"
Jan 15 23:49:13.241667 kubelet[3591]: I0115 23:49:13.241091 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.241667 kubelet[3591]: I0115 23:49:13.241131 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc5687e9763800aa4bdb61d16feb9a93-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-91\" (UID: \"fc5687e9763800aa4bdb61d16feb9a93\") " pod="kube-system/kube-controller-manager-ip-172-31-28-91"
Jan 15 23:49:13.766552 kubelet[3591]: I0115
23:49:13.766493 3591 apiserver.go:52] "Watching apiserver" Jan 15 23:49:13.838138 kubelet[3591]: I0115 23:49:13.838064 3591 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 15 23:49:13.931833 kubelet[3591]: I0115 23:49:13.931790 3591 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-91" Jan 15 23:49:13.943819 kubelet[3591]: E0115 23:49:13.943767 3591 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-91\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-91" Jan 15 23:49:14.006823 kubelet[3591]: I0115 23:49:14.006549 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-91" podStartSLOduration=3.0065127 podStartE2EDuration="3.0065127s" podCreationTimestamp="2026-01-15 23:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:49:14.005797896 +0000 UTC m=+1.392354800" watchObservedRunningTime="2026-01-15 23:49:14.0065127 +0000 UTC m=+1.393069592" Jan 15 23:49:14.087741 kubelet[3591]: I0115 23:49:14.087067 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-91" podStartSLOduration=1.087041856 podStartE2EDuration="1.087041856s" podCreationTimestamp="2026-01-15 23:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:49:14.037765272 +0000 UTC m=+1.424322188" watchObservedRunningTime="2026-01-15 23:49:14.087041856 +0000 UTC m=+1.473598772" Jan 15 23:49:14.603438 kubelet[3591]: I0115 23:49:14.603201 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-91" podStartSLOduration=5.603174987 podStartE2EDuration="5.603174987s" 
podCreationTimestamp="2026-01-15 23:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:49:14.0878508 +0000 UTC m=+1.474407728" watchObservedRunningTime="2026-01-15 23:49:14.603174987 +0000 UTC m=+1.989731999" Jan 15 23:49:15.943681 kubelet[3591]: I0115 23:49:15.943456 3591 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 15 23:49:15.944728 containerd[1995]: time="2026-01-15T23:49:15.944638925Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 15 23:49:15.945276 kubelet[3591]: I0115 23:49:15.945022 3591 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 15 23:49:16.714567 systemd[1]: Created slice kubepods-besteffort-pod08c5cf0e_0e69_4fed_86c5_070f40a0c25e.slice - libcontainer container kubepods-besteffort-pod08c5cf0e_0e69_4fed_86c5_070f40a0c25e.slice. 
Jan 15 23:49:16.764265 kubelet[3591]: I0115 23:49:16.764129 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08c5cf0e-0e69-4fed-86c5-070f40a0c25e-xtables-lock\") pod \"kube-proxy-kdhrf\" (UID: \"08c5cf0e-0e69-4fed-86c5-070f40a0c25e\") " pod="kube-system/kube-proxy-kdhrf" Jan 15 23:49:16.764265 kubelet[3591]: I0115 23:49:16.764219 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08c5cf0e-0e69-4fed-86c5-070f40a0c25e-lib-modules\") pod \"kube-proxy-kdhrf\" (UID: \"08c5cf0e-0e69-4fed-86c5-070f40a0c25e\") " pod="kube-system/kube-proxy-kdhrf" Jan 15 23:49:16.764574 kubelet[3591]: I0115 23:49:16.764551 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08c5cf0e-0e69-4fed-86c5-070f40a0c25e-kube-proxy\") pod \"kube-proxy-kdhrf\" (UID: \"08c5cf0e-0e69-4fed-86c5-070f40a0c25e\") " pod="kube-system/kube-proxy-kdhrf" Jan 15 23:49:16.764791 kubelet[3591]: I0115 23:49:16.764681 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqkx\" (UniqueName: \"kubernetes.io/projected/08c5cf0e-0e69-4fed-86c5-070f40a0c25e-kube-api-access-rwqkx\") pod \"kube-proxy-kdhrf\" (UID: \"08c5cf0e-0e69-4fed-86c5-070f40a0c25e\") " pod="kube-system/kube-proxy-kdhrf" Jan 15 23:49:17.031616 containerd[1995]: time="2026-01-15T23:49:17.030602271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdhrf,Uid:08c5cf0e-0e69-4fed-86c5-070f40a0c25e,Namespace:kube-system,Attempt:0,}" Jan 15 23:49:17.063241 systemd[1]: Created slice kubepods-besteffort-pod2b1ca60f_182a_409e_9608_7b4fcc9a8200.slice - libcontainer container kubepods-besteffort-pod2b1ca60f_182a_409e_9608_7b4fcc9a8200.slice. 
Jan 15 23:49:17.067836 kubelet[3591]: I0115 23:49:17.067745 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b1ca60f-182a-409e-9608-7b4fcc9a8200-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cgwrg\" (UID: \"2b1ca60f-182a-409e-9608-7b4fcc9a8200\") " pod="tigera-operator/tigera-operator-7dcd859c48-cgwrg" Jan 15 23:49:17.069595 kubelet[3591]: I0115 23:49:17.067816 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mssm\" (UniqueName: \"kubernetes.io/projected/2b1ca60f-182a-409e-9608-7b4fcc9a8200-kube-api-access-9mssm\") pod \"tigera-operator-7dcd859c48-cgwrg\" (UID: \"2b1ca60f-182a-409e-9608-7b4fcc9a8200\") " pod="tigera-operator/tigera-operator-7dcd859c48-cgwrg" Jan 15 23:49:17.098768 containerd[1995]: time="2026-01-15T23:49:17.098696631Z" level=info msg="connecting to shim 6c94f2a964926ac9aa27ac0890bbf80ec766b87cd5ac054efe28d33794af0a29" address="unix:///run/containerd/s/4c32321213cd5214fb65c649408d0d660c24451bf9cd1c85d27aabd0a02a74c1" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:49:17.178766 systemd[1]: Started cri-containerd-6c94f2a964926ac9aa27ac0890bbf80ec766b87cd5ac054efe28d33794af0a29.scope - libcontainer container 6c94f2a964926ac9aa27ac0890bbf80ec766b87cd5ac054efe28d33794af0a29. 
Jan 15 23:49:17.242634 containerd[1995]: time="2026-01-15T23:49:17.242582908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kdhrf,Uid:08c5cf0e-0e69-4fed-86c5-070f40a0c25e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c94f2a964926ac9aa27ac0890bbf80ec766b87cd5ac054efe28d33794af0a29\"" Jan 15 23:49:17.249216 containerd[1995]: time="2026-01-15T23:49:17.249159196Z" level=info msg="CreateContainer within sandbox \"6c94f2a964926ac9aa27ac0890bbf80ec766b87cd5ac054efe28d33794af0a29\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 15 23:49:17.272852 containerd[1995]: time="2026-01-15T23:49:17.272784748Z" level=info msg="Container d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:49:17.295638 containerd[1995]: time="2026-01-15T23:49:17.295125088Z" level=info msg="CreateContainer within sandbox \"6c94f2a964926ac9aa27ac0890bbf80ec766b87cd5ac054efe28d33794af0a29\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f\"" Jan 15 23:49:17.297924 containerd[1995]: time="2026-01-15T23:49:17.297881260Z" level=info msg="StartContainer for \"d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f\"" Jan 15 23:49:17.300959 containerd[1995]: time="2026-01-15T23:49:17.300896248Z" level=info msg="connecting to shim d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f" address="unix:///run/containerd/s/4c32321213cd5214fb65c649408d0d660c24451bf9cd1c85d27aabd0a02a74c1" protocol=ttrpc version=3 Jan 15 23:49:17.334797 systemd[1]: Started cri-containerd-d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f.scope - libcontainer container d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f. 
Jan 15 23:49:17.394393 containerd[1995]: time="2026-01-15T23:49:17.394343825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cgwrg,Uid:2b1ca60f-182a-409e-9608-7b4fcc9a8200,Namespace:tigera-operator,Attempt:0,}" Jan 15 23:49:17.461360 containerd[1995]: time="2026-01-15T23:49:17.461284541Z" level=info msg="connecting to shim 3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134" address="unix:///run/containerd/s/d95382073476066c472c074e23611e5062c1960b6f50f0ad382f6af9c337fd60" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:49:17.473668 containerd[1995]: time="2026-01-15T23:49:17.473610101Z" level=info msg="StartContainer for \"d7d61935feba75e612b61b4ff2b2df6f78af8cf82c24f3a696bf90b599c2db0f\" returns successfully" Jan 15 23:49:17.526897 systemd[1]: Started cri-containerd-3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134.scope - libcontainer container 3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134. Jan 15 23:49:17.658225 containerd[1995]: time="2026-01-15T23:49:17.657617454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cgwrg,Uid:2b1ca60f-182a-409e-9608-7b4fcc9a8200,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134\"" Jan 15 23:49:17.669534 containerd[1995]: time="2026-01-15T23:49:17.668194230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 15 23:49:17.893347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551418910.mount: Deactivated successfully. 
Jan 15 23:49:18.700494 kubelet[3591]: I0115 23:49:18.700181 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kdhrf" podStartSLOduration=2.700155739 podStartE2EDuration="2.700155739s" podCreationTimestamp="2026-01-15 23:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:49:17.975692647 +0000 UTC m=+5.362249587" watchObservedRunningTime="2026-01-15 23:49:18.700155739 +0000 UTC m=+6.086712643" Jan 15 23:49:20.069315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327372446.mount: Deactivated successfully. Jan 15 23:49:22.136222 containerd[1995]: time="2026-01-15T23:49:22.136143296Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:22.138404 containerd[1995]: time="2026-01-15T23:49:22.138028100Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 15 23:49:22.140714 containerd[1995]: time="2026-01-15T23:49:22.140654348Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:22.145440 containerd[1995]: time="2026-01-15T23:49:22.145371584Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:22.146973 containerd[1995]: time="2026-01-15T23:49:22.146925068Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 4.478673202s" Jan 15 23:49:22.147127 containerd[1995]: time="2026-01-15T23:49:22.147099344Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 15 23:49:22.151570 containerd[1995]: time="2026-01-15T23:49:22.151338104Z" level=info msg="CreateContainer within sandbox \"3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 15 23:49:22.174198 containerd[1995]: time="2026-01-15T23:49:22.169352360Z" level=info msg="Container bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:49:22.185156 containerd[1995]: time="2026-01-15T23:49:22.185079320Z" level=info msg="CreateContainer within sandbox \"3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726\"" Jan 15 23:49:22.185878 containerd[1995]: time="2026-01-15T23:49:22.185792492Z" level=info msg="StartContainer for \"bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726\"" Jan 15 23:49:22.188762 containerd[1995]: time="2026-01-15T23:49:22.188537012Z" level=info msg="connecting to shim bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726" address="unix:///run/containerd/s/d95382073476066c472c074e23611e5062c1960b6f50f0ad382f6af9c337fd60" protocol=ttrpc version=3 Jan 15 23:49:22.223749 systemd[1]: Started cri-containerd-bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726.scope - libcontainer container bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726. 
Jan 15 23:49:22.284979 containerd[1995]: time="2026-01-15T23:49:22.284908917Z" level=info msg="StartContainer for \"bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726\" returns successfully" Jan 15 23:49:30.709951 sudo[2363]: pam_unix(sudo:session): session closed for user root Jan 15 23:49:30.792500 sshd[2362]: Connection closed by 20.161.92.111 port 57246 Jan 15 23:49:30.791502 sshd-session[2359]: pam_unix(sshd:session): session closed for user core Jan 15 23:49:30.802541 systemd[1]: sshd@6-172.31.28.91:22-20.161.92.111:57246.service: Deactivated successfully. Jan 15 23:49:30.811197 systemd[1]: session-7.scope: Deactivated successfully. Jan 15 23:49:30.812646 systemd[1]: session-7.scope: Consumed 12.175s CPU time, 221.9M memory peak. Jan 15 23:49:30.820048 systemd-logind[1976]: Session 7 logged out. Waiting for processes to exit. Jan 15 23:49:30.828195 systemd-logind[1976]: Removed session 7. Jan 15 23:49:46.470581 kubelet[3591]: I0115 23:49:46.470449 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cgwrg" podStartSLOduration=25.988068451 podStartE2EDuration="30.470257269s" podCreationTimestamp="2026-01-15 23:49:16 +0000 UTC" firstStartedPulling="2026-01-15 23:49:17.66618249 +0000 UTC m=+5.052739394" lastFinishedPulling="2026-01-15 23:49:22.14837132 +0000 UTC m=+9.534928212" observedRunningTime="2026-01-15 23:49:22.9871854 +0000 UTC m=+10.373742328" watchObservedRunningTime="2026-01-15 23:49:46.470257269 +0000 UTC m=+33.856814173" Jan 15 23:49:46.511732 systemd[1]: Created slice kubepods-besteffort-pod3de4ca59_f692_4220_ac4d_075ba293f0c9.slice - libcontainer container kubepods-besteffort-pod3de4ca59_f692_4220_ac4d_075ba293f0c9.slice. 
Jan 15 23:49:46.577113 kubelet[3591]: I0115 23:49:46.576844 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt28h\" (UniqueName: \"kubernetes.io/projected/3de4ca59-f692-4220-ac4d-075ba293f0c9-kube-api-access-kt28h\") pod \"calico-typha-7598b6d7-899zf\" (UID: \"3de4ca59-f692-4220-ac4d-075ba293f0c9\") " pod="calico-system/calico-typha-7598b6d7-899zf" Jan 15 23:49:46.577113 kubelet[3591]: I0115 23:49:46.576957 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3de4ca59-f692-4220-ac4d-075ba293f0c9-tigera-ca-bundle\") pod \"calico-typha-7598b6d7-899zf\" (UID: \"3de4ca59-f692-4220-ac4d-075ba293f0c9\") " pod="calico-system/calico-typha-7598b6d7-899zf" Jan 15 23:49:46.577113 kubelet[3591]: I0115 23:49:46.577000 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3de4ca59-f692-4220-ac4d-075ba293f0c9-typha-certs\") pod \"calico-typha-7598b6d7-899zf\" (UID: \"3de4ca59-f692-4220-ac4d-075ba293f0c9\") " pod="calico-system/calico-typha-7598b6d7-899zf" Jan 15 23:49:46.680542 systemd[1]: Created slice kubepods-besteffort-pod9db4c8d7_8659_49d3_88ff_b928c96c15e9.slice - libcontainer container kubepods-besteffort-pod9db4c8d7_8659_49d3_88ff_b928c96c15e9.slice. 
Jan 15 23:49:46.778781 kubelet[3591]: I0115 23:49:46.778540 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-lib-modules\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.778958 kubelet[3591]: I0115 23:49:46.778870 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-var-run-calico\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.779639 kubelet[3591]: I0115 23:49:46.779443 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-cni-bin-dir\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780527 kubelet[3591]: I0115 23:49:46.780053 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-cni-net-dir\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780663 kubelet[3591]: I0115 23:49:46.780582 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-var-lib-calico\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780663 kubelet[3591]: I0115 23:49:46.780628 3591 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-policysync\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780782 kubelet[3591]: I0115 23:49:46.780670 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9db4c8d7-8659-49d3-88ff-b928c96c15e9-tigera-ca-bundle\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780782 kubelet[3591]: I0115 23:49:46.780715 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-xtables-lock\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780782 kubelet[3591]: I0115 23:49:46.780750 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88vp\" (UniqueName: \"kubernetes.io/projected/9db4c8d7-8659-49d3-88ff-b928c96c15e9-kube-api-access-l88vp\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780928 kubelet[3591]: I0115 23:49:46.780791 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-cni-log-dir\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780928 kubelet[3591]: I0115 23:49:46.780827 3591 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9db4c8d7-8659-49d3-88ff-b928c96c15e9-node-certs\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.780928 kubelet[3591]: I0115 23:49:46.780864 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9db4c8d7-8659-49d3-88ff-b928c96c15e9-flexvol-driver-host\") pod \"calico-node-mldkc\" (UID: \"9db4c8d7-8659-49d3-88ff-b928c96c15e9\") " pod="calico-system/calico-node-mldkc" Jan 15 23:49:46.801666 kubelet[3591]: E0115 23:49:46.801032 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:49:46.834257 containerd[1995]: time="2026-01-15T23:49:46.833002631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7598b6d7-899zf,Uid:3de4ca59-f692-4220-ac4d-075ba293f0c9,Namespace:calico-system,Attempt:0,}" Jan 15 23:49:46.888289 kubelet[3591]: I0115 23:49:46.884928 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fb7073f-5e73-4607-9430-af7f999d9c94-kubelet-dir\") pod \"csi-node-driver-hscnf\" (UID: \"9fb7073f-5e73-4607-9430-af7f999d9c94\") " pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:46.888289 kubelet[3591]: I0115 23:49:46.887237 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9fb7073f-5e73-4607-9430-af7f999d9c94-registration-dir\") pod 
\"csi-node-driver-hscnf\" (UID: \"9fb7073f-5e73-4607-9430-af7f999d9c94\") " pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:46.888289 kubelet[3591]: I0115 23:49:46.887290 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvgns\" (UniqueName: \"kubernetes.io/projected/9fb7073f-5e73-4607-9430-af7f999d9c94-kube-api-access-lvgns\") pod \"csi-node-driver-hscnf\" (UID: \"9fb7073f-5e73-4607-9430-af7f999d9c94\") " pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:46.888289 kubelet[3591]: I0115 23:49:46.887435 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9fb7073f-5e73-4607-9430-af7f999d9c94-varrun\") pod \"csi-node-driver-hscnf\" (UID: \"9fb7073f-5e73-4607-9430-af7f999d9c94\") " pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:46.888289 kubelet[3591]: I0115 23:49:46.887846 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9fb7073f-5e73-4607-9430-af7f999d9c94-socket-dir\") pod \"csi-node-driver-hscnf\" (UID: \"9fb7073f-5e73-4607-9430-af7f999d9c94\") " pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:46.901142 kubelet[3591]: E0115 23:49:46.900723 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:46.901142 kubelet[3591]: W0115 23:49:46.900764 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:46.901142 kubelet[3591]: E0115 23:49:46.900800 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:46.919681 containerd[1995]: time="2026-01-15T23:49:46.919588967Z" level=info msg="connecting to shim 59539e5a1ea993ce7c3bb9d69279650cd7de6aa2a030f8a1b8d03d9ba642b35c" address="unix:///run/containerd/s/8cdfdd1f8d8611a2d4a7b54035452a2b918ce9207fa0067986267093b3be375d" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:49:46.922075 kubelet[3591]: E0115 23:49:46.921869 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:46.922075 kubelet[3591]: W0115 23:49:46.921911 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:46.922075 kubelet[3591]: E0115 23:49:46.921945 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:46.963770 kubelet[3591]: E0115 23:49:46.961904 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:46.963770 kubelet[3591]: W0115 23:49:46.961944 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:46.963770 kubelet[3591]: E0115 23:49:46.961981 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 15 23:49:46.990578 kubelet[3591]: E0115 23:49:46.990130 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:46.992119 kubelet[3591]: W0115 23:49:46.990767 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:46.992119 kubelet[3591]: E0115 23:49:46.991780 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:46.994793 kubelet[3591]: E0115 23:49:46.994746 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:46.994985 kubelet[3591]: W0115 23:49:46.994959 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:46.995321 kubelet[3591]: E0115 23:49:46.995280 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:46.997899 containerd[1995]: time="2026-01-15T23:49:46.997010340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mldkc,Uid:9db4c8d7-8659-49d3-88ff-b928c96c15e9,Namespace:calico-system,Attempt:0,}"
Jan 15 23:49:46.998184 kubelet[3591]: E0115 23:49:46.998085 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:46.998184 kubelet[3591]: W0115 23:49:46.998121 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:46.998698 kubelet[3591]: E0115 23:49:46.998337 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.002254 kubelet[3591]: E0115 23:49:47.001653 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.002254 kubelet[3591]: W0115 23:49:47.001687 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.002602 kubelet[3591]: E0115 23:49:47.002533 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.003261 kubelet[3591]: E0115 23:49:47.002991 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.003261 kubelet[3591]: W0115 23:49:47.003186 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.003416 kubelet[3591]: E0115 23:49:47.003255 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.008038 kubelet[3591]: E0115 23:49:47.005910 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.008038 kubelet[3591]: W0115 23:49:47.007707 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.008038 kubelet[3591]: E0115 23:49:47.007855 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.008989 kubelet[3591]: E0115 23:49:47.008674 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.008989 kubelet[3591]: W0115 23:49:47.008704 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.008989 kubelet[3591]: E0115 23:49:47.008782 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.010613 kubelet[3591]: E0115 23:49:47.009969 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.011080 kubelet[3591]: W0115 23:49:47.011000 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.011667 kubelet[3591]: E0115 23:49:47.011579 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.015788 kubelet[3591]: E0115 23:49:47.015631 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.016394 kubelet[3591]: W0115 23:49:47.015988 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.016394 kubelet[3591]: E0115 23:49:47.016282 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.019981 kubelet[3591]: E0115 23:49:47.018543 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.019981 kubelet[3591]: W0115 23:49:47.018578 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.019981 kubelet[3591]: E0115 23:49:47.018644 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.020406 kubelet[3591]: E0115 23:49:47.020378 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.021647 kubelet[3591]: W0115 23:49:47.020657 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.024263 kubelet[3591]: E0115 23:49:47.022137 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.025114 kubelet[3591]: E0115 23:49:47.024794 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.025114 kubelet[3591]: W0115 23:49:47.024944 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.027241 kubelet[3591]: E0115 23:49:47.026563 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.027241 kubelet[3591]: W0115 23:49:47.027033 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.028535 kubelet[3591]: E0115 23:49:47.028047 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.028535 kubelet[3591]: E0115 23:49:47.028147 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.029948 kubelet[3591]: E0115 23:49:47.029637 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.029948 kubelet[3591]: W0115 23:49:47.029682 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.029948 kubelet[3591]: E0115 23:49:47.029755 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.032356 systemd[1]: Started cri-containerd-59539e5a1ea993ce7c3bb9d69279650cd7de6aa2a030f8a1b8d03d9ba642b35c.scope - libcontainer container 59539e5a1ea993ce7c3bb9d69279650cd7de6aa2a030f8a1b8d03d9ba642b35c.
Jan 15 23:49:47.038936 kubelet[3591]: E0115 23:49:47.038727 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.038936 kubelet[3591]: W0115 23:49:47.038768 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.039137 kubelet[3591]: E0115 23:49:47.038838 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.041514 kubelet[3591]: E0115 23:49:47.040944 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.041514 kubelet[3591]: W0115 23:49:47.041169 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.043377 kubelet[3591]: E0115 23:49:47.043025 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.043377 kubelet[3591]: W0115 23:49:47.043059 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.043377 kubelet[3591]: E0115 23:49:47.043204 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.043377 kubelet[3591]: E0115 23:49:47.043237 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.045919 kubelet[3591]: E0115 23:49:47.045712 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.046378 kubelet[3591]: W0115 23:49:47.046323 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.046934 kubelet[3591]: E0115 23:49:47.046793 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.049144 kubelet[3591]: E0115 23:49:47.049018 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.049144 kubelet[3591]: W0115 23:49:47.049096 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.049696 kubelet[3591]: E0115 23:49:47.049607 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.053419 kubelet[3591]: E0115 23:49:47.053004 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.053419 kubelet[3591]: W0115 23:49:47.053043 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.055515 kubelet[3591]: E0115 23:49:47.055106 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.055515 kubelet[3591]: W0115 23:49:47.055153 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.058278 kubelet[3591]: E0115 23:49:47.057677 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.058278 kubelet[3591]: E0115 23:49:47.057750 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.059536 kubelet[3591]: E0115 23:49:47.058879 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.059536 kubelet[3591]: W0115 23:49:47.059125 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.059536 kubelet[3591]: E0115 23:49:47.059159 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.063747 kubelet[3591]: E0115 23:49:47.062766 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.063747 kubelet[3591]: W0115 23:49:47.062798 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.063747 kubelet[3591]: E0115 23:49:47.062846 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.066346 kubelet[3591]: E0115 23:49:47.066310 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.067688 kubelet[3591]: W0115 23:49:47.067359 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.067688 kubelet[3591]: E0115 23:49:47.067549 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.071042 kubelet[3591]: E0115 23:49:47.070796 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.071501 kubelet[3591]: W0115 23:49:47.071321 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.071501 kubelet[3591]: E0115 23:49:47.071369 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.073205 kubelet[3591]: E0115 23:49:47.073170 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:47.073562 kubelet[3591]: W0115 23:49:47.073532 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:47.073902 kubelet[3591]: E0115 23:49:47.073750 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:47.082445 containerd[1995]: time="2026-01-15T23:49:47.082388636Z" level=info msg="connecting to shim a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0" address="unix:///run/containerd/s/f3bca27471d044d58232b8336b60ca0fc842a1eb1d65c2f373e6d5709b55d291" namespace=k8s.io protocol=ttrpc version=3
Jan 15 23:49:47.146059 systemd[1]: Started cri-containerd-a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0.scope - libcontainer container a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0.
Jan 15 23:49:47.192248 containerd[1995]: time="2026-01-15T23:49:47.192178353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7598b6d7-899zf,Uid:3de4ca59-f692-4220-ac4d-075ba293f0c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"59539e5a1ea993ce7c3bb9d69279650cd7de6aa2a030f8a1b8d03d9ba642b35c\""
Jan 15 23:49:47.197499 containerd[1995]: time="2026-01-15T23:49:47.197424141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 15 23:49:47.251733 containerd[1995]: time="2026-01-15T23:49:47.251631057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mldkc,Uid:9db4c8d7-8659-49d3-88ff-b928c96c15e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\""
Jan 15 23:49:48.569831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483117898.mount: Deactivated successfully.
Jan 15 23:49:48.855983 kubelet[3591]: E0115 23:49:48.855447 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94"
Jan 15 23:49:49.529043 containerd[1995]: time="2026-01-15T23:49:49.528976536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:49:49.530580 containerd[1995]: time="2026-01-15T23:49:49.530503896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 15 23:49:49.532828 containerd[1995]: time="2026-01-15T23:49:49.532750080Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:49:49.537843 containerd[1995]: time="2026-01-15T23:49:49.537398328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 15 23:49:49.538839 containerd[1995]: time="2026-01-15T23:49:49.538600632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.341078451s"
Jan 15 23:49:49.538839 containerd[1995]: time="2026-01-15T23:49:49.538658520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 15 23:49:49.541991 containerd[1995]: time="2026-01-15T23:49:49.541924704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 15 23:49:49.570413 containerd[1995]: time="2026-01-15T23:49:49.570226536Z" level=info msg="CreateContainer within sandbox \"59539e5a1ea993ce7c3bb9d69279650cd7de6aa2a030f8a1b8d03d9ba642b35c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 15 23:49:49.587059 containerd[1995]: time="2026-01-15T23:49:49.587004600Z" level=info msg="Container d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5: CDI devices from CRI Config.CDIDevices: []"
Jan 15 23:49:49.616007 containerd[1995]: time="2026-01-15T23:49:49.615935053Z" level=info msg="CreateContainer within sandbox \"59539e5a1ea993ce7c3bb9d69279650cd7de6aa2a030f8a1b8d03d9ba642b35c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5\""
Jan 15 23:49:49.617039 containerd[1995]: time="2026-01-15T23:49:49.616981561Z" level=info msg="StartContainer for \"d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5\""
Jan 15 23:49:49.620200 containerd[1995]: time="2026-01-15T23:49:49.620120617Z" level=info msg="connecting to shim d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5" address="unix:///run/containerd/s/8cdfdd1f8d8611a2d4a7b54035452a2b918ce9207fa0067986267093b3be375d" protocol=ttrpc version=3
Jan 15 23:49:49.657801 systemd[1]: Started cri-containerd-d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5.scope - libcontainer container d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5.
Jan 15 23:49:49.746042 containerd[1995]: time="2026-01-15T23:49:49.745715221Z" level=info msg="StartContainer for \"d2c11f6b8f3eb2b80afd7d5fc6f8f18094165d00067cba858fa59c8526ea45e5\" returns successfully"
Jan 15 23:49:50.112873 kubelet[3591]: I0115 23:49:50.112780 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7598b6d7-899zf" podStartSLOduration=1.768488844 podStartE2EDuration="4.112756619s" podCreationTimestamp="2026-01-15 23:49:46 +0000 UTC" firstStartedPulling="2026-01-15 23:49:47.196227909 +0000 UTC m=+34.582784813" lastFinishedPulling="2026-01-15 23:49:49.540495696 +0000 UTC m=+36.927052588" observedRunningTime="2026-01-15 23:49:50.111986111 +0000 UTC m=+37.498543039" watchObservedRunningTime="2026-01-15 23:49:50.112756619 +0000 UTC m=+37.499313523"
Jan 15 23:49:50.174294 kubelet[3591]: E0115 23:49:50.173799 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.174294 kubelet[3591]: W0115 23:49:50.173837 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.174294 kubelet[3591]: E0115 23:49:50.173867 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.175239 kubelet[3591]: E0115 23:49:50.175191 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.175760 kubelet[3591]: W0115 23:49:50.175432 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.176300 kubelet[3591]: E0115 23:49:50.175986 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.177502 kubelet[3591]: E0115 23:49:50.176582 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.177709 kubelet[3591]: W0115 23:49:50.177673 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.177851 kubelet[3591]: E0115 23:49:50.177828 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.178339 kubelet[3591]: E0115 23:49:50.178313 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.178761 kubelet[3591]: W0115 23:49:50.178514 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.178761 kubelet[3591]: E0115 23:49:50.178552 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.179106 kubelet[3591]: E0115 23:49:50.179079 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.179372 kubelet[3591]: W0115 23:49:50.179240 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.180397 kubelet[3591]: E0115 23:49:50.179543 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.180730 kubelet[3591]: E0115 23:49:50.180701 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.180852 kubelet[3591]: W0115 23:49:50.180827 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.181171 kubelet[3591]: E0115 23:49:50.180957 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.181406 kubelet[3591]: E0115 23:49:50.181369 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.181749 kubelet[3591]: W0115 23:49:50.181718 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.182318 kubelet[3591]: E0115 23:49:50.181871 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.183481 kubelet[3591]: E0115 23:49:50.183427 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.183644 kubelet[3591]: W0115 23:49:50.183617 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.183793 kubelet[3591]: E0115 23:49:50.183768 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.185525 kubelet[3591]: E0115 23:49:50.184638 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.185525 kubelet[3591]: W0115 23:49:50.184673 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.185525 kubelet[3591]: E0115 23:49:50.184703 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.185871 kubelet[3591]: E0115 23:49:50.185845 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.185975 kubelet[3591]: W0115 23:49:50.185953 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.186246 kubelet[3591]: E0115 23:49:50.186071 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.186848 kubelet[3591]: E0115 23:49:50.186817 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.187003 kubelet[3591]: W0115 23:49:50.186979 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.187287 kubelet[3591]: E0115 23:49:50.187096 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.187920 kubelet[3591]: E0115 23:49:50.187888 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.188096 kubelet[3591]: W0115 23:49:50.188071 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.188229 kubelet[3591]: E0115 23:49:50.188206 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.191055 kubelet[3591]: E0115 23:49:50.190781 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.191055 kubelet[3591]: W0115 23:49:50.190816 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.191055 kubelet[3591]: E0115 23:49:50.190849 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.191695 kubelet[3591]: E0115 23:49:50.191615 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.191695 kubelet[3591]: W0115 23:49:50.191646 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.192076 kubelet[3591]: E0115 23:49:50.191866 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.192366 kubelet[3591]: E0115 23:49:50.192340 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.192513 kubelet[3591]: W0115 23:49:50.192456 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.192721 kubelet[3591]: E0115 23:49:50.192624 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.253494 kubelet[3591]: E0115 23:49:50.253327 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.253494 kubelet[3591]: W0115 23:49:50.253378 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.253494 kubelet[3591]: E0115 23:49:50.253414 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.254059 kubelet[3591]: E0115 23:49:50.254021 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.254059 kubelet[3591]: W0115 23:49:50.254054 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.254173 kubelet[3591]: E0115 23:49:50.254101 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.254578 kubelet[3591]: E0115 23:49:50.254540 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.254578 kubelet[3591]: W0115 23:49:50.254569 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.254885 kubelet[3591]: E0115 23:49:50.254605 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.256704 kubelet[3591]: E0115 23:49:50.256650 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.256704 kubelet[3591]: W0115 23:49:50.256692 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.256908 kubelet[3591]: E0115 23:49:50.256741 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.258149 kubelet[3591]: E0115 23:49:50.258094 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.258149 kubelet[3591]: W0115 23:49:50.258135 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.258422 kubelet[3591]: E0115 23:49:50.258364 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 15 23:49:50.258732 kubelet[3591]: E0115 23:49:50.258692 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 15 23:49:50.258732 kubelet[3591]: W0115 23:49:50.258722 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 15 23:49:50.258872 kubelet[3591]: E0115 23:49:50.258837 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.259264 kubelet[3591]: E0115 23:49:50.259227 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.259264 kubelet[3591]: W0115 23:49:50.259257 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.259432 kubelet[3591]: E0115 23:49:50.259396 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:50.259846 kubelet[3591]: E0115 23:49:50.259807 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.259846 kubelet[3591]: W0115 23:49:50.259838 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.259969 kubelet[3591]: E0115 23:49:50.259905 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.261000 kubelet[3591]: E0115 23:49:50.260953 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.261000 kubelet[3591]: W0115 23:49:50.260988 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.261163 kubelet[3591]: E0115 23:49:50.261032 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:50.261665 kubelet[3591]: E0115 23:49:50.261616 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.261665 kubelet[3591]: W0115 23:49:50.261650 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.261785 kubelet[3591]: E0115 23:49:50.261765 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.262197 kubelet[3591]: E0115 23:49:50.262053 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.262197 kubelet[3591]: W0115 23:49:50.262082 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.262597 kubelet[3591]: E0115 23:49:50.262252 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:50.262810 kubelet[3591]: E0115 23:49:50.262773 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.263069 kubelet[3591]: W0115 23:49:50.262806 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.263069 kubelet[3591]: E0115 23:49:50.262869 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.263444 kubelet[3591]: E0115 23:49:50.263122 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.263444 kubelet[3591]: W0115 23:49:50.263138 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.263444 kubelet[3591]: E0115 23:49:50.263307 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:50.263931 kubelet[3591]: E0115 23:49:50.263742 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.263931 kubelet[3591]: W0115 23:49:50.263763 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.263931 kubelet[3591]: E0115 23:49:50.263917 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.264687 kubelet[3591]: E0115 23:49:50.264649 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.264687 kubelet[3591]: W0115 23:49:50.264683 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.264966 kubelet[3591]: E0115 23:49:50.264920 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:50.265676 kubelet[3591]: E0115 23:49:50.265580 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.265676 kubelet[3591]: W0115 23:49:50.265639 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.266184 kubelet[3591]: E0115 23:49:50.265931 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.267301 kubelet[3591]: E0115 23:49:50.267228 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.267809 kubelet[3591]: W0115 23:49:50.267622 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.267809 kubelet[3591]: E0115 23:49:50.267658 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:50.269784 kubelet[3591]: E0115 23:49:50.269756 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:50.270006 kubelet[3591]: W0115 23:49:50.269982 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:50.270217 kubelet[3591]: E0115 23:49:50.270194 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:50.856376 kubelet[3591]: E0115 23:49:50.855907 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:49:50.934774 containerd[1995]: time="2026-01-15T23:49:50.934696743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:50.938103 containerd[1995]: time="2026-01-15T23:49:50.937657359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 15 23:49:50.940152 containerd[1995]: time="2026-01-15T23:49:50.940094883Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:50.944617 containerd[1995]: time="2026-01-15T23:49:50.944566239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:50.946075 containerd[1995]: time="2026-01-15T23:49:50.946012455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.403872039s" Jan 15 23:49:50.946214 containerd[1995]: time="2026-01-15T23:49:50.946073643Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 15 23:49:50.951620 containerd[1995]: time="2026-01-15T23:49:50.951556179Z" level=info msg="CreateContainer within sandbox \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 15 23:49:50.974584 containerd[1995]: time="2026-01-15T23:49:50.973218699Z" level=info msg="Container 94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:49:50.980430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325667747.mount: Deactivated successfully. Jan 15 23:49:50.999530 containerd[1995]: time="2026-01-15T23:49:50.999286647Z" level=info msg="CreateContainer within sandbox \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694\"" Jan 15 23:49:51.000684 containerd[1995]: time="2026-01-15T23:49:51.000601415Z" level=info msg="StartContainer for \"94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694\"" Jan 15 23:49:51.008526 containerd[1995]: time="2026-01-15T23:49:51.007312199Z" level=info msg="connecting to shim 94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694" address="unix:///run/containerd/s/f3bca27471d044d58232b8336b60ca0fc842a1eb1d65c2f373e6d5709b55d291" protocol=ttrpc version=3 Jan 15 23:49:51.059835 systemd[1]: Started cri-containerd-94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694.scope - libcontainer container 94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694. 
Jan 15 23:49:51.101076 kubelet[3591]: E0115 23:49:51.101025 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.101300 kubelet[3591]: W0115 23:49:51.101064 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.101300 kubelet[3591]: E0115 23:49:51.101126 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.101684 kubelet[3591]: E0115 23:49:51.101611 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.101684 kubelet[3591]: W0115 23:49:51.101667 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.101883 kubelet[3591]: E0115 23:49:51.101692 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.102134 kubelet[3591]: E0115 23:49:51.102100 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.102134 kubelet[3591]: W0115 23:49:51.102128 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.102404 kubelet[3591]: E0115 23:49:51.102151 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.102623 kubelet[3591]: E0115 23:49:51.102589 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.102623 kubelet[3591]: W0115 23:49:51.102617 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.102779 kubelet[3591]: E0115 23:49:51.102640 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.104252 kubelet[3591]: E0115 23:49:51.104173 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.104252 kubelet[3591]: W0115 23:49:51.104194 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.104252 kubelet[3591]: E0115 23:49:51.104245 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.104759 kubelet[3591]: E0115 23:49:51.104725 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.104759 kubelet[3591]: W0115 23:49:51.104743 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.104759 kubelet[3591]: E0115 23:49:51.104792 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.105218 kubelet[3591]: E0115 23:49:51.105146 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.105218 kubelet[3591]: W0115 23:49:51.105162 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.105218 kubelet[3591]: E0115 23:49:51.105180 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.106856 kubelet[3591]: E0115 23:49:51.106535 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.106856 kubelet[3591]: W0115 23:49:51.106718 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.106856 kubelet[3591]: E0115 23:49:51.106910 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.108920 kubelet[3591]: E0115 23:49:51.108889 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.109219 kubelet[3591]: W0115 23:49:51.108986 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.109219 kubelet[3591]: E0115 23:49:51.109021 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.109760 kubelet[3591]: E0115 23:49:51.109644 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.109760 kubelet[3591]: W0115 23:49:51.109668 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.109760 kubelet[3591]: E0115 23:49:51.109690 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.110156 kubelet[3591]: E0115 23:49:51.110137 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.110320 kubelet[3591]: W0115 23:49:51.110260 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.110491 kubelet[3591]: E0115 23:49:51.110294 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.110844 kubelet[3591]: E0115 23:49:51.110824 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.110947 kubelet[3591]: W0115 23:49:51.110927 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.111064 kubelet[3591]: E0115 23:49:51.111041 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.111484 kubelet[3591]: E0115 23:49:51.111428 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.111484 kubelet[3591]: W0115 23:49:51.111448 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.111749 kubelet[3591]: E0115 23:49:51.111656 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.112210 kubelet[3591]: E0115 23:49:51.112099 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.112210 kubelet[3591]: W0115 23:49:51.112121 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.112210 kubelet[3591]: E0115 23:49:51.112141 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.112867 kubelet[3591]: E0115 23:49:51.112798 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.112867 kubelet[3591]: W0115 23:49:51.112824 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.113841 kubelet[3591]: E0115 23:49:51.112968 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.167230 kubelet[3591]: E0115 23:49:51.167146 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.167643 kubelet[3591]: W0115 23:49:51.167180 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.167643 kubelet[3591]: E0115 23:49:51.167340 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.169606 kubelet[3591]: E0115 23:49:51.169215 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.169606 kubelet[3591]: W0115 23:49:51.169290 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.169606 kubelet[3591]: E0115 23:49:51.169415 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.170780 kubelet[3591]: E0115 23:49:51.170745 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.170930 kubelet[3591]: W0115 23:49:51.170906 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.171063 kubelet[3591]: E0115 23:49:51.171041 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.171649 kubelet[3591]: E0115 23:49:51.171623 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.171831 kubelet[3591]: W0115 23:49:51.171808 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.171995 kubelet[3591]: E0115 23:49:51.171937 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.172589 kubelet[3591]: E0115 23:49:51.172435 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.172734 kubelet[3591]: W0115 23:49:51.172708 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.172893 kubelet[3591]: E0115 23:49:51.172840 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.173691 kubelet[3591]: E0115 23:49:51.173622 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.173691 kubelet[3591]: W0115 23:49:51.173652 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.174367 kubelet[3591]: E0115 23:49:51.174170 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.174847 kubelet[3591]: E0115 23:49:51.174785 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.174847 kubelet[3591]: W0115 23:49:51.174814 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.175105 kubelet[3591]: E0115 23:49:51.174994 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.175562 kubelet[3591]: E0115 23:49:51.175538 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.175746 kubelet[3591]: W0115 23:49:51.175654 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.176096 kubelet[3591]: E0115 23:49:51.176069 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.176796 kubelet[3591]: E0115 23:49:51.176770 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.177078 kubelet[3591]: W0115 23:49:51.176904 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.177593 kubelet[3591]: E0115 23:49:51.177570 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.177702 kubelet[3591]: W0115 23:49:51.177678 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.178723 kubelet[3591]: E0115 23:49:51.178298 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.178723 kubelet[3591]: E0115 23:49:51.178671 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.179087 kubelet[3591]: E0115 23:49:51.179026 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.179087 kubelet[3591]: W0115 23:49:51.179047 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.179246 kubelet[3591]: E0115 23:49:51.179217 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.180505 kubelet[3591]: E0115 23:49:51.180300 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.180505 kubelet[3591]: W0115 23:49:51.180332 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.180748 kubelet[3591]: E0115 23:49:51.180723 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.181356 kubelet[3591]: E0115 23:49:51.181334 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.182379 kubelet[3591]: W0115 23:49:51.181425 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.182379 kubelet[3591]: E0115 23:49:51.181792 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.182379 kubelet[3591]: E0115 23:49:51.181964 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.182379 kubelet[3591]: W0115 23:49:51.181979 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.182379 kubelet[3591]: E0115 23:49:51.182082 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.184009 kubelet[3591]: E0115 23:49:51.183705 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.184202 kubelet[3591]: W0115 23:49:51.184166 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.184392 kubelet[3591]: E0115 23:49:51.184350 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.185173 kubelet[3591]: E0115 23:49:51.185135 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.185319 kubelet[3591]: W0115 23:49:51.185291 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.185598 kubelet[3591]: E0115 23:49:51.185450 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 15 23:49:51.188902 kubelet[3591]: E0115 23:49:51.188582 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.188902 kubelet[3591]: W0115 23:49:51.188625 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.188902 kubelet[3591]: E0115 23:49:51.188671 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.189533 kubelet[3591]: E0115 23:49:51.189390 3591 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 15 23:49:51.189533 kubelet[3591]: W0115 23:49:51.189418 3591 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 15 23:49:51.189533 kubelet[3591]: E0115 23:49:51.189445 3591 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 15 23:49:51.189817 containerd[1995]: time="2026-01-15T23:49:51.189639408Z" level=info msg="StartContainer for \"94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694\" returns successfully" Jan 15 23:49:51.216510 systemd[1]: cri-containerd-94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694.scope: Deactivated successfully. 
Jan 15 23:49:51.227375 containerd[1995]: time="2026-01-15T23:49:51.227285701Z" level=info msg="received container exit event container_id:\"94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694\" id:\"94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694\" pid:4228 exited_at:{seconds:1768520991 nanos:226833469}" Jan 15 23:49:51.272494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94bdc75890cd0388e68a1de6fe10d0d105dcc970c0c5067a281e798562310694-rootfs.mount: Deactivated successfully. Jan 15 23:49:52.100570 containerd[1995]: time="2026-01-15T23:49:52.100446841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 15 23:49:52.859532 kubelet[3591]: E0115 23:49:52.858321 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:49:54.855728 kubelet[3591]: E0115 23:49:54.855393 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:49:55.886206 containerd[1995]: time="2026-01-15T23:49:55.886130228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:55.888097 containerd[1995]: time="2026-01-15T23:49:55.887986580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 15 23:49:55.889491 containerd[1995]: time="2026-01-15T23:49:55.889119056Z" level=info msg="ImageCreate event 
name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:55.893691 containerd[1995]: time="2026-01-15T23:49:55.893642852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:49:55.895998 containerd[1995]: time="2026-01-15T23:49:55.895925036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.795381535s" Jan 15 23:49:55.896118 containerd[1995]: time="2026-01-15T23:49:55.896011628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 15 23:49:55.901835 containerd[1995]: time="2026-01-15T23:49:55.901750040Z" level=info msg="CreateContainer within sandbox \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 15 23:49:55.920510 containerd[1995]: time="2026-01-15T23:49:55.918751364Z" level=info msg="Container ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:49:55.930425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172906705.mount: Deactivated successfully. 
Jan 15 23:49:55.947101 containerd[1995]: time="2026-01-15T23:49:55.947005880Z" level=info msg="CreateContainer within sandbox \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627\"" Jan 15 23:49:55.952594 containerd[1995]: time="2026-01-15T23:49:55.951885728Z" level=info msg="StartContainer for \"ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627\"" Jan 15 23:49:55.959500 containerd[1995]: time="2026-01-15T23:49:55.959412476Z" level=info msg="connecting to shim ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627" address="unix:///run/containerd/s/f3bca27471d044d58232b8336b60ca0fc842a1eb1d65c2f373e6d5709b55d291" protocol=ttrpc version=3 Jan 15 23:49:56.007789 systemd[1]: Started cri-containerd-ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627.scope - libcontainer container ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627. Jan 15 23:49:56.112185 containerd[1995]: time="2026-01-15T23:49:56.112004141Z" level=info msg="StartContainer for \"ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627\" returns successfully" Jan 15 23:49:56.857509 kubelet[3591]: E0115 23:49:56.857322 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:49:57.074854 systemd[1]: cri-containerd-ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627.scope: Deactivated successfully. Jan 15 23:49:57.075928 systemd[1]: cri-containerd-ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627.scope: Consumed 923ms CPU time, 193.8M memory peak, 165.9M written to disk. 
Jan 15 23:49:57.080365 containerd[1995]: time="2026-01-15T23:49:57.080296818Z" level=info msg="received container exit event container_id:\"ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627\" id:\"ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627\" pid:4324 exited_at:{seconds:1768520997 nanos:79570878}" Jan 15 23:49:57.095940 kubelet[3591]: I0115 23:49:57.095886 3591 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 15 23:49:57.164127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ace292f8c1f225b55de728cc84dd265c7b88ba3bf40c107a745a9c9f6461d627-rootfs.mount: Deactivated successfully. Jan 15 23:49:57.216772 systemd[1]: Created slice kubepods-burstable-pod6b6e1659_8c34_4e70_a449_e806105116b0.slice - libcontainer container kubepods-burstable-pod6b6e1659_8c34_4e70_a449_e806105116b0.slice. Jan 15 23:49:57.285397 systemd[1]: Created slice kubepods-besteffort-pod250297aa_f2ed_4da8_b086_a79052c5e783.slice - libcontainer container kubepods-besteffort-pod250297aa_f2ed_4da8_b086_a79052c5e783.slice. Jan 15 23:49:57.314187 systemd[1]: Created slice kubepods-burstable-pod8c717cde_58a7_4b04_87d8_59853ebab9ea.slice - libcontainer container kubepods-burstable-pod8c717cde_58a7_4b04_87d8_59853ebab9ea.slice. 
Jan 15 23:49:57.318508 kubelet[3591]: I0115 23:49:57.317363 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b6e1659-8c34-4e70-a449-e806105116b0-config-volume\") pod \"coredns-668d6bf9bc-5c5b4\" (UID: \"6b6e1659-8c34-4e70-a449-e806105116b0\") " pod="kube-system/coredns-668d6bf9bc-5c5b4" Jan 15 23:49:57.318508 kubelet[3591]: I0115 23:49:57.317436 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/250297aa-f2ed-4da8-b086-a79052c5e783-tigera-ca-bundle\") pod \"calico-kube-controllers-5fdbbb9d69-q7mqz\" (UID: \"250297aa-f2ed-4da8-b086-a79052c5e783\") " pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" Jan 15 23:49:57.320510 kubelet[3591]: I0115 23:49:57.320392 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkbrk\" (UniqueName: \"kubernetes.io/projected/8c717cde-58a7-4b04-87d8-59853ebab9ea-kube-api-access-pkbrk\") pod \"coredns-668d6bf9bc-9kccb\" (UID: \"8c717cde-58a7-4b04-87d8-59853ebab9ea\") " pod="kube-system/coredns-668d6bf9bc-9kccb" Jan 15 23:49:57.320510 kubelet[3591]: I0115 23:49:57.320509 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c717cde-58a7-4b04-87d8-59853ebab9ea-config-volume\") pod \"coredns-668d6bf9bc-9kccb\" (UID: \"8c717cde-58a7-4b04-87d8-59853ebab9ea\") " pod="kube-system/coredns-668d6bf9bc-9kccb" Jan 15 23:49:57.320827 kubelet[3591]: I0115 23:49:57.320586 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmf7\" (UniqueName: \"kubernetes.io/projected/250297aa-f2ed-4da8-b086-a79052c5e783-kube-api-access-lgmf7\") pod \"calico-kube-controllers-5fdbbb9d69-q7mqz\" (UID: 
\"250297aa-f2ed-4da8-b086-a79052c5e783\") " pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" Jan 15 23:49:57.320827 kubelet[3591]: I0115 23:49:57.320625 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vcw\" (UniqueName: \"kubernetes.io/projected/6b6e1659-8c34-4e70-a449-e806105116b0-kube-api-access-k4vcw\") pod \"coredns-668d6bf9bc-5c5b4\" (UID: \"6b6e1659-8c34-4e70-a449-e806105116b0\") " pod="kube-system/coredns-668d6bf9bc-5c5b4" Jan 15 23:49:57.342534 systemd[1]: Created slice kubepods-besteffort-pod7292bea6_012f_4e29_ba2d_73a4ea488a56.slice - libcontainer container kubepods-besteffort-pod7292bea6_012f_4e29_ba2d_73a4ea488a56.slice. Jan 15 23:49:57.360702 systemd[1]: Created slice kubepods-besteffort-pod3a3a871b_481b_4197_950a_9e2f48b0e53a.slice - libcontainer container kubepods-besteffort-pod3a3a871b_481b_4197_950a_9e2f48b0e53a.slice. Jan 15 23:49:57.383558 systemd[1]: Created slice kubepods-besteffort-podf0772df0_c398_4efd_9017_9dcc4fd8a789.slice - libcontainer container kubepods-besteffort-podf0772df0_c398_4efd_9017_9dcc4fd8a789.slice. Jan 15 23:49:57.398991 systemd[1]: Created slice kubepods-besteffort-pod5b0fb1bc_c77b_46e5_94d7_ad2de2073aa0.slice - libcontainer container kubepods-besteffort-pod5b0fb1bc_c77b_46e5_94d7_ad2de2073aa0.slice. 
Jan 15 23:49:57.434765 kubelet[3591]: I0115 23:49:57.421180 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a3a871b-481b-4197-950a-9e2f48b0e53a-calico-apiserver-certs\") pod \"calico-apiserver-6c69b78f6b-zzmq4\" (UID: \"3a3a871b-481b-4197-950a-9e2f48b0e53a\") " pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" Jan 15 23:49:57.434765 kubelet[3591]: I0115 23:49:57.421521 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7292bea6-012f-4e29-ba2d-73a4ea488a56-goldmane-key-pair\") pod \"goldmane-666569f655-zcqlh\" (UID: \"7292bea6-012f-4e29-ba2d-73a4ea488a56\") " pod="calico-system/goldmane-666569f655-zcqlh" Jan 15 23:49:57.434765 kubelet[3591]: I0115 23:49:57.421738 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdw9\" (UniqueName: \"kubernetes.io/projected/f0772df0-c398-4efd-9017-9dcc4fd8a789-kube-api-access-7bdw9\") pod \"whisker-79bfb57d86-zc2fb\" (UID: \"f0772df0-c398-4efd-9017-9dcc4fd8a789\") " pod="calico-system/whisker-79bfb57d86-zc2fb" Jan 15 23:49:57.434765 kubelet[3591]: I0115 23:49:57.421921 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdjv\" (UniqueName: \"kubernetes.io/projected/5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0-kube-api-access-xgdjv\") pod \"calico-apiserver-6c69b78f6b-96q62\" (UID: \"5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0\") " pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" Jan 15 23:49:57.434765 kubelet[3591]: I0115 23:49:57.422197 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7292bea6-012f-4e29-ba2d-73a4ea488a56-config\") pod \"goldmane-666569f655-zcqlh\" (UID: 
\"7292bea6-012f-4e29-ba2d-73a4ea488a56\") " pod="calico-system/goldmane-666569f655-zcqlh" Jan 15 23:49:57.435129 kubelet[3591]: I0115 23:49:57.422238 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7292bea6-012f-4e29-ba2d-73a4ea488a56-goldmane-ca-bundle\") pod \"goldmane-666569f655-zcqlh\" (UID: \"7292bea6-012f-4e29-ba2d-73a4ea488a56\") " pod="calico-system/goldmane-666569f655-zcqlh" Jan 15 23:49:57.435129 kubelet[3591]: I0115 23:49:57.422352 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-backend-key-pair\") pod \"whisker-79bfb57d86-zc2fb\" (UID: \"f0772df0-c398-4efd-9017-9dcc4fd8a789\") " pod="calico-system/whisker-79bfb57d86-zc2fb" Jan 15 23:49:57.435129 kubelet[3591]: I0115 23:49:57.422396 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-ca-bundle\") pod \"whisker-79bfb57d86-zc2fb\" (UID: \"f0772df0-c398-4efd-9017-9dcc4fd8a789\") " pod="calico-system/whisker-79bfb57d86-zc2fb" Jan 15 23:49:57.435129 kubelet[3591]: I0115 23:49:57.422437 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0-calico-apiserver-certs\") pod \"calico-apiserver-6c69b78f6b-96q62\" (UID: \"5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0\") " pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" Jan 15 23:49:57.435129 kubelet[3591]: I0115 23:49:57.425036 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54nh\" (UniqueName: 
\"kubernetes.io/projected/3a3a871b-481b-4197-950a-9e2f48b0e53a-kube-api-access-w54nh\") pod \"calico-apiserver-6c69b78f6b-zzmq4\" (UID: \"3a3a871b-481b-4197-950a-9e2f48b0e53a\") " pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" Jan 15 23:49:57.435386 kubelet[3591]: I0115 23:49:57.425123 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwng9\" (UniqueName: \"kubernetes.io/projected/7292bea6-012f-4e29-ba2d-73a4ea488a56-kube-api-access-jwng9\") pod \"goldmane-666569f655-zcqlh\" (UID: \"7292bea6-012f-4e29-ba2d-73a4ea488a56\") " pod="calico-system/goldmane-666569f655-zcqlh" Jan 15 23:49:57.570512 containerd[1995]: time="2026-01-15T23:49:57.570374036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5c5b4,Uid:6b6e1659-8c34-4e70-a449-e806105116b0,Namespace:kube-system,Attempt:0,}" Jan 15 23:49:57.599702 containerd[1995]: time="2026-01-15T23:49:57.598481540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fdbbb9d69-q7mqz,Uid:250297aa-f2ed-4da8-b086-a79052c5e783,Namespace:calico-system,Attempt:0,}" Jan 15 23:49:57.636023 containerd[1995]: time="2026-01-15T23:49:57.635957324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kccb,Uid:8c717cde-58a7-4b04-87d8-59853ebab9ea,Namespace:kube-system,Attempt:0,}" Jan 15 23:49:57.675065 containerd[1995]: time="2026-01-15T23:49:57.674984613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-zzmq4,Uid:3a3a871b-481b-4197-950a-9e2f48b0e53a,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:49:57.696344 containerd[1995]: time="2026-01-15T23:49:57.696202521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bfb57d86-zc2fb,Uid:f0772df0-c398-4efd-9017-9dcc4fd8a789,Namespace:calico-system,Attempt:0,}" Jan 15 23:49:57.708498 containerd[1995]: time="2026-01-15T23:49:57.707404869Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-96q62,Uid:5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:49:57.954875 containerd[1995]: time="2026-01-15T23:49:57.954455446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcqlh,Uid:7292bea6-012f-4e29-ba2d-73a4ea488a56,Namespace:calico-system,Attempt:0,}" Jan 15 23:49:58.156188 containerd[1995]: time="2026-01-15T23:49:58.155774575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 15 23:49:58.245148 containerd[1995]: time="2026-01-15T23:49:58.244692427Z" level=error msg="Failed to destroy network for sandbox \"3923105c489d92d3141d9f154c8e340bf9ecc24e80ad3e89808f507c88bbd62b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.249892 containerd[1995]: time="2026-01-15T23:49:58.248040247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bfb57d86-zc2fb,Uid:f0772df0-c398-4efd-9017-9dcc4fd8a789,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3923105c489d92d3141d9f154c8e340bf9ecc24e80ad3e89808f507c88bbd62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.250494 kubelet[3591]: E0115 23:49:58.250392 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3923105c489d92d3141d9f154c8e340bf9ecc24e80ad3e89808f507c88bbd62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.252164 systemd[1]: 
run-netns-cni\x2de6e3ce7e\x2df9e8\x2d87ba\x2d8113\x2d463b28f570e7.mount: Deactivated successfully. Jan 15 23:49:58.256914 kubelet[3591]: E0115 23:49:58.254384 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3923105c489d92d3141d9f154c8e340bf9ecc24e80ad3e89808f507c88bbd62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bfb57d86-zc2fb" Jan 15 23:49:58.256914 kubelet[3591]: E0115 23:49:58.254441 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3923105c489d92d3141d9f154c8e340bf9ecc24e80ad3e89808f507c88bbd62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bfb57d86-zc2fb" Jan 15 23:49:58.256914 kubelet[3591]: E0115 23:49:58.254547 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79bfb57d86-zc2fb_calico-system(f0772df0-c398-4efd-9017-9dcc4fd8a789)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79bfb57d86-zc2fb_calico-system(f0772df0-c398-4efd-9017-9dcc4fd8a789)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3923105c489d92d3141d9f154c8e340bf9ecc24e80ad3e89808f507c88bbd62b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bfb57d86-zc2fb" podUID="f0772df0-c398-4efd-9017-9dcc4fd8a789" Jan 15 23:49:58.261402 containerd[1995]: time="2026-01-15T23:49:58.261290576Z" level=error msg="Failed to 
destroy network for sandbox \"70926bbaa7c99c194012808b63371ca4ffa18ce729d5ac32e918356e97968d88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.268511 containerd[1995]: time="2026-01-15T23:49:58.267851768Z" level=error msg="Failed to destroy network for sandbox \"71384bd76371442358e3bab1ffcba708cdaedd228c2d7c642eaf3f83b7c0a454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.269707 systemd[1]: run-netns-cni\x2d70dc6fc4\x2db205\x2d0fa5\x2df8c3\x2d7f5ae6bb8af7.mount: Deactivated successfully. Jan 15 23:49:58.276853 containerd[1995]: time="2026-01-15T23:49:58.274926944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-zzmq4,Uid:3a3a871b-481b-4197-950a-9e2f48b0e53a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71384bd76371442358e3bab1ffcba708cdaedd228c2d7c642eaf3f83b7c0a454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.278106 systemd[1]: run-netns-cni\x2d4c29fcae\x2d801a\x2d7bf7\x2d8816\x2d3a451300090e.mount: Deactivated successfully. 
Jan 15 23:49:58.284740 kubelet[3591]: E0115 23:49:58.284688 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71384bd76371442358e3bab1ffcba708cdaedd228c2d7c642eaf3f83b7c0a454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.286290 kubelet[3591]: E0115 23:49:58.284934 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71384bd76371442358e3bab1ffcba708cdaedd228c2d7c642eaf3f83b7c0a454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" Jan 15 23:49:58.286290 kubelet[3591]: E0115 23:49:58.284975 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71384bd76371442358e3bab1ffcba708cdaedd228c2d7c642eaf3f83b7c0a454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" Jan 15 23:49:58.286290 kubelet[3591]: E0115 23:49:58.285061 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c69b78f6b-zzmq4_calico-apiserver(3a3a871b-481b-4197-950a-9e2f48b0e53a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c69b78f6b-zzmq4_calico-apiserver(3a3a871b-481b-4197-950a-9e2f48b0e53a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"71384bd76371442358e3bab1ffcba708cdaedd228c2d7c642eaf3f83b7c0a454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:49:58.287212 containerd[1995]: time="2026-01-15T23:49:58.286883540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kccb,Uid:8c717cde-58a7-4b04-87d8-59853ebab9ea,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70926bbaa7c99c194012808b63371ca4ffa18ce729d5ac32e918356e97968d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.287673 kubelet[3591]: E0115 23:49:58.287604 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70926bbaa7c99c194012808b63371ca4ffa18ce729d5ac32e918356e97968d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.287756 kubelet[3591]: E0115 23:49:58.287682 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70926bbaa7c99c194012808b63371ca4ffa18ce729d5ac32e918356e97968d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9kccb" Jan 15 23:49:58.287756 kubelet[3591]: E0115 23:49:58.287716 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70926bbaa7c99c194012808b63371ca4ffa18ce729d5ac32e918356e97968d88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9kccb" Jan 15 23:49:58.287883 kubelet[3591]: E0115 23:49:58.287789 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9kccb_kube-system(8c717cde-58a7-4b04-87d8-59853ebab9ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9kccb_kube-system(8c717cde-58a7-4b04-87d8-59853ebab9ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70926bbaa7c99c194012808b63371ca4ffa18ce729d5ac32e918356e97968d88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9kccb" podUID="8c717cde-58a7-4b04-87d8-59853ebab9ea" Jan 15 23:49:58.291958 containerd[1995]: time="2026-01-15T23:49:58.291714620Z" level=error msg="Failed to destroy network for sandbox \"b8a290c6a371e151559d7c98e8acc6d48dd6bd17010b7048b33229d9d70c583e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.300446 containerd[1995]: time="2026-01-15T23:49:58.300170504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fdbbb9d69-q7mqz,Uid:250297aa-f2ed-4da8-b086-a79052c5e783,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a290c6a371e151559d7c98e8acc6d48dd6bd17010b7048b33229d9d70c583e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.301010 systemd[1]: run-netns-cni\x2d859b63ae\x2d857b\x2d756a\x2dc5ff\x2dd8b0d651306c.mount: Deactivated successfully. Jan 15 23:49:58.305042 kubelet[3591]: E0115 23:49:58.304941 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a290c6a371e151559d7c98e8acc6d48dd6bd17010b7048b33229d9d70c583e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.305042 kubelet[3591]: E0115 23:49:58.305027 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a290c6a371e151559d7c98e8acc6d48dd6bd17010b7048b33229d9d70c583e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" Jan 15 23:49:58.305274 kubelet[3591]: E0115 23:49:58.305065 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a290c6a371e151559d7c98e8acc6d48dd6bd17010b7048b33229d9d70c583e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" Jan 15 23:49:58.306286 kubelet[3591]: E0115 23:49:58.305386 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fdbbb9d69-q7mqz_calico-system(250297aa-f2ed-4da8-b086-a79052c5e783)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fdbbb9d69-q7mqz_calico-system(250297aa-f2ed-4da8-b086-a79052c5e783)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8a290c6a371e151559d7c98e8acc6d48dd6bd17010b7048b33229d9d70c583e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:49:58.316766 containerd[1995]: time="2026-01-15T23:49:58.316536284Z" level=error msg="Failed to destroy network for sandbox \"05af7a8e7742639c3565247de1876d677f89f3c1d34d7da2674fe2f49caa7d59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.319023 containerd[1995]: time="2026-01-15T23:49:58.318956096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-96q62,Uid:5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05af7a8e7742639c3565247de1876d677f89f3c1d34d7da2674fe2f49caa7d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.319573 containerd[1995]: time="2026-01-15T23:49:58.319117736Z" level=error msg="Failed to destroy network for sandbox \"fd018207bace91dfe51535ed698edf40676b7cd9159053c1cafadc4e9588ddd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.319750 
kubelet[3591]: E0115 23:49:58.319656 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05af7a8e7742639c3565247de1876d677f89f3c1d34d7da2674fe2f49caa7d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.319821 kubelet[3591]: E0115 23:49:58.319770 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05af7a8e7742639c3565247de1876d677f89f3c1d34d7da2674fe2f49caa7d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" Jan 15 23:49:58.319945 kubelet[3591]: E0115 23:49:58.319832 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05af7a8e7742639c3565247de1876d677f89f3c1d34d7da2674fe2f49caa7d59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" Jan 15 23:49:58.320009 kubelet[3591]: E0115 23:49:58.319945 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c69b78f6b-96q62_calico-apiserver(5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c69b78f6b-96q62_calico-apiserver(5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05af7a8e7742639c3565247de1876d677f89f3c1d34d7da2674fe2f49caa7d59\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:49:58.322116 containerd[1995]: time="2026-01-15T23:49:58.321848408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5c5b4,Uid:6b6e1659-8c34-4e70-a449-e806105116b0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd018207bace91dfe51535ed698edf40676b7cd9159053c1cafadc4e9588ddd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.324484 kubelet[3591]: E0115 23:49:58.323961 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd018207bace91dfe51535ed698edf40676b7cd9159053c1cafadc4e9588ddd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.324484 kubelet[3591]: E0115 23:49:58.324054 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd018207bace91dfe51535ed698edf40676b7cd9159053c1cafadc4e9588ddd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5c5b4" Jan 15 23:49:58.324484 kubelet[3591]: E0115 23:49:58.324090 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fd018207bace91dfe51535ed698edf40676b7cd9159053c1cafadc4e9588ddd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5c5b4" Jan 15 23:49:58.324771 kubelet[3591]: E0115 23:49:58.324559 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5c5b4_kube-system(6b6e1659-8c34-4e70-a449-e806105116b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5c5b4_kube-system(6b6e1659-8c34-4e70-a449-e806105116b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd018207bace91dfe51535ed698edf40676b7cd9159053c1cafadc4e9588ddd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5c5b4" podUID="6b6e1659-8c34-4e70-a449-e806105116b0" Jan 15 23:49:58.343571 containerd[1995]: time="2026-01-15T23:49:58.343438340Z" level=error msg="Failed to destroy network for sandbox \"3e0a9acc465daef1eac6b444b50a1f9340224daf21b32e70b0067748fffb0d27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.345062 containerd[1995]: time="2026-01-15T23:49:58.344999120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcqlh,Uid:7292bea6-012f-4e29-ba2d-73a4ea488a56,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0a9acc465daef1eac6b444b50a1f9340224daf21b32e70b0067748fffb0d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.345617 kubelet[3591]: E0115 23:49:58.345376 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0a9acc465daef1eac6b444b50a1f9340224daf21b32e70b0067748fffb0d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.345617 kubelet[3591]: E0115 23:49:58.345484 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0a9acc465daef1eac6b444b50a1f9340224daf21b32e70b0067748fffb0d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zcqlh" Jan 15 23:49:58.345617 kubelet[3591]: E0115 23:49:58.345538 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0a9acc465daef1eac6b444b50a1f9340224daf21b32e70b0067748fffb0d27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zcqlh" Jan 15 23:49:58.346213 kubelet[3591]: E0115 23:49:58.345606 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-zcqlh_calico-system(7292bea6-012f-4e29-ba2d-73a4ea488a56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zcqlh_calico-system(7292bea6-012f-4e29-ba2d-73a4ea488a56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3e0a9acc465daef1eac6b444b50a1f9340224daf21b32e70b0067748fffb0d27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:49:58.870277 systemd[1]: Created slice kubepods-besteffort-pod9fb7073f_5e73_4607_9430_af7f999d9c94.slice - libcontainer container kubepods-besteffort-pod9fb7073f_5e73_4607_9430_af7f999d9c94.slice. Jan 15 23:49:58.875649 containerd[1995]: time="2026-01-15T23:49:58.875513159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hscnf,Uid:9fb7073f-5e73-4607-9430-af7f999d9c94,Namespace:calico-system,Attempt:0,}" Jan 15 23:49:58.967514 containerd[1995]: time="2026-01-15T23:49:58.967356623Z" level=error msg="Failed to destroy network for sandbox \"c156c539317bb78d1e8cbf499cd1048290356363faa83e94eba5f1bf47820d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.969721 containerd[1995]: time="2026-01-15T23:49:58.969569447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hscnf,Uid:9fb7073f-5e73-4607-9430-af7f999d9c94,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c156c539317bb78d1e8cbf499cd1048290356363faa83e94eba5f1bf47820d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.970270 kubelet[3591]: E0115 23:49:58.970219 3591 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c156c539317bb78d1e8cbf499cd1048290356363faa83e94eba5f1bf47820d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 15 23:49:58.970380 kubelet[3591]: E0115 23:49:58.970299 3591 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c156c539317bb78d1e8cbf499cd1048290356363faa83e94eba5f1bf47820d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:58.970380 kubelet[3591]: E0115 23:49:58.970339 3591 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c156c539317bb78d1e8cbf499cd1048290356363faa83e94eba5f1bf47820d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hscnf" Jan 15 23:49:58.970541 kubelet[3591]: E0115 23:49:58.970405 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c156c539317bb78d1e8cbf499cd1048290356363faa83e94eba5f1bf47820d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hscnf" 
podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:49:59.159834 systemd[1]: run-netns-cni\x2d82364745\x2d0ea4\x2d787f\x2d030b\x2d2cf560f4e6ab.mount: Deactivated successfully. Jan 15 23:49:59.160449 systemd[1]: run-netns-cni\x2d2a7cb56b\x2dd86e\x2d6996\x2d890d\x2d56398e3b8c59.mount: Deactivated successfully. Jan 15 23:49:59.160613 systemd[1]: run-netns-cni\x2d21cc801d\x2d74e9\x2d3215\x2d6be8\x2df63ad9e87070.mount: Deactivated successfully. Jan 15 23:50:05.873725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630283908.mount: Deactivated successfully. Jan 15 23:50:05.921942 containerd[1995]: time="2026-01-15T23:50:05.921854682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:05.924135 containerd[1995]: time="2026-01-15T23:50:05.923851278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 15 23:50:05.926420 containerd[1995]: time="2026-01-15T23:50:05.926352858Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:05.930894 containerd[1995]: time="2026-01-15T23:50:05.930842082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 15 23:50:05.932739 containerd[1995]: time="2026-01-15T23:50:05.932068158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.776228891s" Jan 15 23:50:05.932739 
containerd[1995]: time="2026-01-15T23:50:05.932126610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 15 23:50:05.970239 containerd[1995]: time="2026-01-15T23:50:05.969743526Z" level=info msg="CreateContainer within sandbox \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 15 23:50:06.000762 containerd[1995]: time="2026-01-15T23:50:06.000689786Z" level=info msg="Container 09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:06.023241 containerd[1995]: time="2026-01-15T23:50:06.023166950Z" level=info msg="CreateContainer within sandbox \"a405443e43c5b9141c599e5a4f92fd0418cb76a705f066b05f6dac81f0259ae0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61\"" Jan 15 23:50:06.024330 containerd[1995]: time="2026-01-15T23:50:06.024275102Z" level=info msg="StartContainer for \"09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61\"" Jan 15 23:50:06.027994 containerd[1995]: time="2026-01-15T23:50:06.027872198Z" level=info msg="connecting to shim 09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61" address="unix:///run/containerd/s/f3bca27471d044d58232b8336b60ca0fc842a1eb1d65c2f373e6d5709b55d291" protocol=ttrpc version=3 Jan 15 23:50:06.110943 systemd[1]: Started cri-containerd-09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61.scope - libcontainer container 09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61. 
Jan 15 23:50:06.251638 containerd[1995]: time="2026-01-15T23:50:06.251576727Z" level=info msg="StartContainer for \"09f3a70b2f21f99ec6f14c839d315df7b6374cf3e2ab2e7fb1b5ba3d647aae61\" returns successfully" Jan 15 23:50:06.506108 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 15 23:50:06.506282 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 15 23:50:06.913530 kubelet[3591]: I0115 23:50:06.913081 3591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bdw9\" (UniqueName: \"kubernetes.io/projected/f0772df0-c398-4efd-9017-9dcc4fd8a789-kube-api-access-7bdw9\") pod \"f0772df0-c398-4efd-9017-9dcc4fd8a789\" (UID: \"f0772df0-c398-4efd-9017-9dcc4fd8a789\") " Jan 15 23:50:06.913530 kubelet[3591]: I0115 23:50:06.913187 3591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-ca-bundle\") pod \"f0772df0-c398-4efd-9017-9dcc4fd8a789\" (UID: \"f0772df0-c398-4efd-9017-9dcc4fd8a789\") " Jan 15 23:50:06.913530 kubelet[3591]: I0115 23:50:06.913235 3591 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-backend-key-pair\") pod \"f0772df0-c398-4efd-9017-9dcc4fd8a789\" (UID: \"f0772df0-c398-4efd-9017-9dcc4fd8a789\") " Jan 15 23:50:06.914987 kubelet[3591]: I0115 23:50:06.914810 3591 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f0772df0-c398-4efd-9017-9dcc4fd8a789" (UID: "f0772df0-c398-4efd-9017-9dcc4fd8a789"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 15 23:50:06.927547 kubelet[3591]: I0115 23:50:06.926048 3591 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0772df0-c398-4efd-9017-9dcc4fd8a789-kube-api-access-7bdw9" (OuterVolumeSpecName: "kube-api-access-7bdw9") pod "f0772df0-c398-4efd-9017-9dcc4fd8a789" (UID: "f0772df0-c398-4efd-9017-9dcc4fd8a789"). InnerVolumeSpecName "kube-api-access-7bdw9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 15 23:50:06.932359 kubelet[3591]: I0115 23:50:06.929426 3591 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f0772df0-c398-4efd-9017-9dcc4fd8a789" (UID: "f0772df0-c398-4efd-9017-9dcc4fd8a789"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 15 23:50:06.937977 systemd[1]: var-lib-kubelet-pods-f0772df0\x2dc398\x2d4efd\x2d9017\x2d9dcc4fd8a789-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7bdw9.mount: Deactivated successfully. Jan 15 23:50:06.938182 systemd[1]: var-lib-kubelet-pods-f0772df0\x2dc398\x2d4efd\x2d9017\x2d9dcc4fd8a789-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 15 23:50:07.014443 kubelet[3591]: I0115 23:50:07.014385 3591 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-backend-key-pair\") on node \"ip-172-31-28-91\" DevicePath \"\"" Jan 15 23:50:07.014443 kubelet[3591]: I0115 23:50:07.014444 3591 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0772df0-c398-4efd-9017-9dcc4fd8a789-whisker-ca-bundle\") on node \"ip-172-31-28-91\" DevicePath \"\"" Jan 15 23:50:07.014708 kubelet[3591]: I0115 23:50:07.014582 3591 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bdw9\" (UniqueName: \"kubernetes.io/projected/f0772df0-c398-4efd-9017-9dcc4fd8a789-kube-api-access-7bdw9\") on node \"ip-172-31-28-91\" DevicePath \"\"" Jan 15 23:50:07.248693 systemd[1]: Removed slice kubepods-besteffort-podf0772df0_c398_4efd_9017_9dcc4fd8a789.slice - libcontainer container kubepods-besteffort-podf0772df0_c398_4efd_9017_9dcc4fd8a789.slice. Jan 15 23:50:07.279542 kubelet[3591]: I0115 23:50:07.279015 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mldkc" podStartSLOduration=2.600251303 podStartE2EDuration="21.27895042s" podCreationTimestamp="2026-01-15 23:49:46 +0000 UTC" firstStartedPulling="2026-01-15 23:49:47.254316777 +0000 UTC m=+34.640873681" lastFinishedPulling="2026-01-15 23:50:05.933015894 +0000 UTC m=+53.319572798" observedRunningTime="2026-01-15 23:50:07.274177876 +0000 UTC m=+54.660734780" watchObservedRunningTime="2026-01-15 23:50:07.27895042 +0000 UTC m=+54.665507360" Jan 15 23:50:07.423539 systemd[1]: Created slice kubepods-besteffort-pod4b2d9ae2_30d4_43cf_844d_a86d433a646c.slice - libcontainer container kubepods-besteffort-pod4b2d9ae2_30d4_43cf_844d_a86d433a646c.slice. 
Jan 15 23:50:07.520867 kubelet[3591]: I0115 23:50:07.520707 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnz7g\" (UniqueName: \"kubernetes.io/projected/4b2d9ae2-30d4-43cf-844d-a86d433a646c-kube-api-access-fnz7g\") pod \"whisker-6bbb75d98d-f8wxn\" (UID: \"4b2d9ae2-30d4-43cf-844d-a86d433a646c\") " pod="calico-system/whisker-6bbb75d98d-f8wxn" Jan 15 23:50:07.520867 kubelet[3591]: I0115 23:50:07.520788 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b2d9ae2-30d4-43cf-844d-a86d433a646c-whisker-ca-bundle\") pod \"whisker-6bbb75d98d-f8wxn\" (UID: \"4b2d9ae2-30d4-43cf-844d-a86d433a646c\") " pod="calico-system/whisker-6bbb75d98d-f8wxn" Jan 15 23:50:07.521142 kubelet[3591]: I0115 23:50:07.520878 3591 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4b2d9ae2-30d4-43cf-844d-a86d433a646c-whisker-backend-key-pair\") pod \"whisker-6bbb75d98d-f8wxn\" (UID: \"4b2d9ae2-30d4-43cf-844d-a86d433a646c\") " pod="calico-system/whisker-6bbb75d98d-f8wxn" Jan 15 23:50:07.731562 containerd[1995]: time="2026-01-15T23:50:07.731505259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bbb75d98d-f8wxn,Uid:4b2d9ae2-30d4-43cf-844d-a86d433a646c,Namespace:calico-system,Attempt:0,}" Jan 15 23:50:08.051748 (udev-worker)[4615]: Network interface NamePolicy= disabled on kernel command line. 
Jan 15 23:50:08.055206 systemd-networkd[1841]: cali90cda7613f5: Link UP Jan 15 23:50:08.055653 systemd-networkd[1841]: cali90cda7613f5: Gained carrier Jan 15 23:50:08.090606 containerd[1995]: 2026-01-15 23:50:07.779 [INFO][4667] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 23:50:08.090606 containerd[1995]: 2026-01-15 23:50:07.860 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0 whisker-6bbb75d98d- calico-system 4b2d9ae2-30d4-43cf-844d-a86d433a646c 931 0 2026-01-15 23:50:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bbb75d98d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-91 whisker-6bbb75d98d-f8wxn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali90cda7613f5 [] [] }} ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-" Jan 15 23:50:08.090606 containerd[1995]: 2026-01-15 23:50:07.860 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.090606 containerd[1995]: 2026-01-15 23:50:07.968 [INFO][4678] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" HandleID="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Workload="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:07.968 
[INFO][4678] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" HandleID="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Workload="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8650), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-91", "pod":"whisker-6bbb75d98d-f8wxn", "timestamp":"2026-01-15 23:50:07.968647628 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:07.968 [INFO][4678] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:07.969 [INFO][4678] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:07.969 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:07.988 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" host="ip-172-31-28-91" Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:07.997 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:08.005 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:08.008 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:08.011 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:08.090996 containerd[1995]: 2026-01-15 23:50:08.011 [INFO][4678] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" host="ip-172-31-28-91" Jan 15 23:50:08.091679 containerd[1995]: 2026-01-15 23:50:08.014 [INFO][4678] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493 Jan 15 23:50:08.091679 containerd[1995]: 2026-01-15 23:50:08.021 [INFO][4678] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" host="ip-172-31-28-91" Jan 15 23:50:08.091679 containerd[1995]: 2026-01-15 23:50:08.031 [INFO][4678] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.65/26] block=192.168.85.64/26 
handle="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" host="ip-172-31-28-91" Jan 15 23:50:08.091679 containerd[1995]: 2026-01-15 23:50:08.031 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.65/26] handle="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" host="ip-172-31-28-91" Jan 15 23:50:08.091679 containerd[1995]: 2026-01-15 23:50:08.031 [INFO][4678] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 23:50:08.091679 containerd[1995]: 2026-01-15 23:50:08.032 [INFO][4678] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.65/26] IPv6=[] ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" HandleID="k8s-pod-network.c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Workload="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.091956 containerd[1995]: 2026-01-15 23:50:08.038 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0", GenerateName:"whisker-6bbb75d98d-", Namespace:"calico-system", SelfLink:"", UID:"4b2d9ae2-30d4-43cf-844d-a86d433a646c", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 50, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bbb75d98d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"whisker-6bbb75d98d-f8wxn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90cda7613f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:08.091956 containerd[1995]: 2026-01-15 23:50:08.039 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.65/32] ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.092132 containerd[1995]: 2026-01-15 23:50:08.039 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90cda7613f5 ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.092132 containerd[1995]: 2026-01-15 23:50:08.057 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.092230 containerd[1995]: 2026-01-15 23:50:08.060 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" 
Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0", GenerateName:"whisker-6bbb75d98d-", Namespace:"calico-system", SelfLink:"", UID:"4b2d9ae2-30d4-43cf-844d-a86d433a646c", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 50, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bbb75d98d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493", Pod:"whisker-6bbb75d98d-f8wxn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali90cda7613f5", MAC:"d2:b2:ff:e4:9d:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:08.092344 containerd[1995]: 2026-01-15 23:50:08.084 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" Namespace="calico-system" Pod="whisker-6bbb75d98d-f8wxn" WorkloadEndpoint="ip--172--31--28--91-k8s-whisker--6bbb75d98d--f8wxn-eth0" Jan 15 23:50:08.164505 containerd[1995]: 
time="2026-01-15T23:50:08.164372609Z" level=info msg="connecting to shim c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493" address="unix:///run/containerd/s/8c59aef284101ce4a8f094ecd13bd05fac61c77be534569febfaa7d84115c0e3" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:08.209820 systemd[1]: Started cri-containerd-c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493.scope - libcontainer container c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493. Jan 15 23:50:08.372930 containerd[1995]: time="2026-01-15T23:50:08.372711666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bbb75d98d-f8wxn,Uid:4b2d9ae2-30d4-43cf-844d-a86d433a646c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9f71b8fcb94e8e5ec8a69f7d0b4864304287a4be18d060db95af45e23ad0493\"" Jan 15 23:50:08.380565 containerd[1995]: time="2026-01-15T23:50:08.380499210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:50:08.688491 containerd[1995]: time="2026-01-15T23:50:08.688385527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:08.690858 containerd[1995]: time="2026-01-15T23:50:08.690767839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:50:08.691802 containerd[1995]: time="2026-01-15T23:50:08.690775579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:50:08.691931 kubelet[3591]: E0115 23:50:08.691268 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:50:08.691931 kubelet[3591]: E0115 23:50:08.691336 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:50:08.698286 kubelet[3591]: E0115 23:50:08.698197 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bb2667bde4a84941ae0fd665fe854e1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:08.703975 containerd[1995]: time="2026-01-15T23:50:08.703883731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 23:50:08.863946 kubelet[3591]: I0115 23:50:08.863874 3591 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0772df0-c398-4efd-9017-9dcc4fd8a789" path="/var/lib/kubelet/pods/f0772df0-c398-4efd-9017-9dcc4fd8a789/volumes" Jan 15 23:50:08.866495 containerd[1995]: time="2026-01-15T23:50:08.866423516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5c5b4,Uid:6b6e1659-8c34-4e70-a449-e806105116b0,Namespace:kube-system,Attempt:0,}" Jan 15 23:50:08.985515 containerd[1995]: time="2026-01-15T23:50:08.985120929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:08.988680 containerd[1995]: time="2026-01-15T23:50:08.988607805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:50:08.988836 containerd[1995]: time="2026-01-15T23:50:08.988676265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:50:08.989278 kubelet[3591]: E0115 23:50:08.989192 3591 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:50:08.989377 kubelet[3591]: E0115 23:50:08.989284 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:50:08.991794 kubelet[3591]: E0115 23:50:08.991641 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:09.004260 kubelet[3591]: E0115 23:50:09.004159 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:50:09.231069 kubelet[3591]: E0115 
23:50:09.230956 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:50:09.245386 systemd-networkd[1841]: cali607ef8ad2ba: Link UP Jan 15 23:50:09.249661 systemd-networkd[1841]: cali607ef8ad2ba: Gained carrier Jan 15 23:50:09.289081 containerd[1995]: 2026-01-15 23:50:08.961 [INFO][4851] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 15 23:50:09.289081 containerd[1995]: 2026-01-15 23:50:09.036 [INFO][4851] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0 coredns-668d6bf9bc- kube-system 6b6e1659-8c34-4e70-a449-e806105116b0 854 0 2026-01-15 23:49:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-91 coredns-668d6bf9bc-5c5b4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali607ef8ad2ba [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 
9153 0 }] [] }} ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-" Jan 15 23:50:09.289081 containerd[1995]: 2026-01-15 23:50:09.036 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.289081 containerd[1995]: 2026-01-15 23:50:09.136 [INFO][4864] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" HandleID="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Workload="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.138 [INFO][4864] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" HandleID="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Workload="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-91", "pod":"coredns-668d6bf9bc-5c5b4", "timestamp":"2026-01-15 23:50:09.136863474 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.138 [INFO][4864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.138 [INFO][4864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.138 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.156 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" host="ip-172-31-28-91" Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.165 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.173 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.178 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.182 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:09.289418 containerd[1995]: 2026-01-15 23:50:09.183 [INFO][4864] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" host="ip-172-31-28-91" Jan 15 23:50:09.291451 containerd[1995]: 2026-01-15 23:50:09.186 [INFO][4864] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb Jan 15 23:50:09.291451 containerd[1995]: 2026-01-15 23:50:09.199 [INFO][4864] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" host="ip-172-31-28-91" Jan 15 23:50:09.291451 containerd[1995]: 
2026-01-15 23:50:09.211 [INFO][4864] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.66/26] block=192.168.85.64/26 handle="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" host="ip-172-31-28-91" Jan 15 23:50:09.291451 containerd[1995]: 2026-01-15 23:50:09.211 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.66/26] handle="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" host="ip-172-31-28-91" Jan 15 23:50:09.291451 containerd[1995]: 2026-01-15 23:50:09.211 [INFO][4864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 23:50:09.291451 containerd[1995]: 2026-01-15 23:50:09.212 [INFO][4864] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.66/26] IPv6=[] ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" HandleID="k8s-pod-network.d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Workload="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.291796 containerd[1995]: 2026-01-15 23:50:09.228 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6b6e1659-8c34-4e70-a449-e806105116b0", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"coredns-668d6bf9bc-5c5b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali607ef8ad2ba", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:09.291796 containerd[1995]: 2026-01-15 23:50:09.228 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.66/32] ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.291796 containerd[1995]: 2026-01-15 23:50:09.228 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali607ef8ad2ba ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.291796 containerd[1995]: 2026-01-15 23:50:09.251 [INFO][4851] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.291796 containerd[1995]: 2026-01-15 23:50:09.254 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6b6e1659-8c34-4e70-a449-e806105116b0", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb", Pod:"coredns-668d6bf9bc-5c5b4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali607ef8ad2ba", MAC:"46:23:3d:5f:6b:a7", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:09.291796 containerd[1995]: 2026-01-15 23:50:09.274 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" Namespace="kube-system" Pod="coredns-668d6bf9bc-5c5b4" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--5c5b4-eth0" Jan 15 23:50:09.374532 containerd[1995]: time="2026-01-15T23:50:09.373873555Z" level=info msg="connecting to shim d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb" address="unix:///run/containerd/s/da56e0635f6e326bdc90feffdd7cf6659bf164dcb248fac6ae008abb654bdb10" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:09.458092 systemd[1]: Started cri-containerd-d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb.scope - libcontainer container d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb. 
Jan 15 23:50:09.618100 containerd[1995]: time="2026-01-15T23:50:09.618036464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5c5b4,Uid:6b6e1659-8c34-4e70-a449-e806105116b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb\"" Jan 15 23:50:09.628305 containerd[1995]: time="2026-01-15T23:50:09.628164656Z" level=info msg="CreateContainer within sandbox \"d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 23:50:09.658816 containerd[1995]: time="2026-01-15T23:50:09.657286340Z" level=info msg="Container c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:09.677381 containerd[1995]: time="2026-01-15T23:50:09.677306132Z" level=info msg="CreateContainer within sandbox \"d128c552f16d3ad40dac2f415929359319499e504c7401b7ed65208af4d9e5bb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1\"" Jan 15 23:50:09.680820 containerd[1995]: time="2026-01-15T23:50:09.680726084Z" level=info msg="StartContainer for \"c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1\"" Jan 15 23:50:09.687180 containerd[1995]: time="2026-01-15T23:50:09.686972732Z" level=info msg="connecting to shim c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1" address="unix:///run/containerd/s/da56e0635f6e326bdc90feffdd7cf6659bf164dcb248fac6ae008abb654bdb10" protocol=ttrpc version=3 Jan 15 23:50:09.752801 systemd[1]: Started cri-containerd-c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1.scope - libcontainer container c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1. 
Jan 15 23:50:09.850498 containerd[1995]: time="2026-01-15T23:50:09.850243341Z" level=info msg="StartContainer for \"c28005d792c5c6f84971fb4f87547d9e6027697d81d3bb8ab388d8c4f61428b1\" returns successfully" Jan 15 23:50:09.857362 containerd[1995]: time="2026-01-15T23:50:09.857292993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fdbbb9d69-q7mqz,Uid:250297aa-f2ed-4da8-b086-a79052c5e783,Namespace:calico-system,Attempt:0,}" Jan 15 23:50:09.881066 systemd-networkd[1841]: cali90cda7613f5: Gained IPv6LL Jan 15 23:50:10.205502 systemd-networkd[1841]: calif6f62758ec1: Link UP Jan 15 23:50:10.207117 systemd-networkd[1841]: calif6f62758ec1: Gained carrier Jan 15 23:50:10.243068 kubelet[3591]: E0115 23:50:10.242807 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.004 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0 
calico-kube-controllers-5fdbbb9d69- calico-system 250297aa-f2ed-4da8-b086-a79052c5e783 859 0 2026-01-15 23:49:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fdbbb9d69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-91 calico-kube-controllers-5fdbbb9d69-q7mqz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif6f62758ec1 [] [] }} ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.005 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.071 [INFO][5022] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" HandleID="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Workload="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.071 [INFO][5022] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" HandleID="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Workload="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000393840), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-91", "pod":"calico-kube-controllers-5fdbbb9d69-q7mqz", "timestamp":"2026-01-15 23:50:10.07134273 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.072 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.072 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.072 [INFO][5022] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.103 [INFO][5022] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.135 [INFO][5022] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.151 [INFO][5022] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.156 [INFO][5022] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.160 [INFO][5022] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.160 [INFO][5022] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.85.64/26 handle="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.168 [INFO][5022] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6 Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.177 [INFO][5022] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.193 [INFO][5022] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.67/26] block=192.168.85.64/26 handle="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.193 [INFO][5022] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.67/26] handle="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" host="ip-172-31-28-91" Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.193 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:50:10.251410 containerd[1995]: 2026-01-15 23:50:10.193 [INFO][5022] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.67/26] IPv6=[] ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" HandleID="k8s-pod-network.c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Workload="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" Jan 15 23:50:10.254372 containerd[1995]: 2026-01-15 23:50:10.198 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0", GenerateName:"calico-kube-controllers-5fdbbb9d69-", Namespace:"calico-system", SelfLink:"", UID:"250297aa-f2ed-4da8-b086-a79052c5e783", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fdbbb9d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"calico-kube-controllers-5fdbbb9d69-q7mqz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6f62758ec1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:10.254372 containerd[1995]: 2026-01-15 23:50:10.199 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.67/32] ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" Jan 15 23:50:10.254372 containerd[1995]: 2026-01-15 23:50:10.199 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6f62758ec1 ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" Jan 15 23:50:10.254372 containerd[1995]: 2026-01-15 23:50:10.207 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" Jan 15 23:50:10.254372 containerd[1995]: 2026-01-15 23:50:10.208 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0", GenerateName:"calico-kube-controllers-5fdbbb9d69-", Namespace:"calico-system", SelfLink:"", UID:"250297aa-f2ed-4da8-b086-a79052c5e783", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fdbbb9d69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6", Pod:"calico-kube-controllers-5fdbbb9d69-q7mqz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif6f62758ec1", MAC:"36:37:1c:32:a3:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:10.254372 containerd[1995]: 2026-01-15 23:50:10.233 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" Namespace="calico-system" Pod="calico-kube-controllers-5fdbbb9d69-q7mqz" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--kube--controllers--5fdbbb9d69--q7mqz-eth0" 
Jan 15 23:50:10.328521 containerd[1995]: time="2026-01-15T23:50:10.328063279Z" level=info msg="connecting to shim c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6" address="unix:///run/containerd/s/dd7699e7db3c01742a39d1fd031b9b5c5ca828550b8d29bbb540ec122818715d" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:10.387405 kubelet[3591]: I0115 23:50:10.387309 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5c5b4" podStartSLOduration=54.387283652 podStartE2EDuration="54.387283652s" podCreationTimestamp="2026-01-15 23:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:50:10.385429496 +0000 UTC m=+57.771986520" watchObservedRunningTime="2026-01-15 23:50:10.387283652 +0000 UTC m=+57.773840556" Jan 15 23:50:10.393011 systemd-networkd[1841]: cali607ef8ad2ba: Gained IPv6LL Jan 15 23:50:10.414298 systemd[1]: Started cri-containerd-c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6.scope - libcontainer container c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6. 
Jan 15 23:50:10.750142 containerd[1995]: time="2026-01-15T23:50:10.750018790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fdbbb9d69-q7mqz,Uid:250297aa-f2ed-4da8-b086-a79052c5e783,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0595a08e4a10749efaa3e794e7e5e39d5fef00be260487231b88b891f8a66d6\"" Jan 15 23:50:10.758164 containerd[1995]: time="2026-01-15T23:50:10.757810978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:50:10.857373 containerd[1995]: time="2026-01-15T23:50:10.857200222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-zzmq4,Uid:3a3a871b-481b-4197-950a-9e2f48b0e53a,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:50:10.958737 systemd[1]: Started sshd@7-172.31.28.91:22-20.161.92.111:34920.service - OpenSSH per-connection server daemon (20.161.92.111:34920). Jan 15 23:50:11.044218 containerd[1995]: time="2026-01-15T23:50:11.043905355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:11.052450 containerd[1995]: time="2026-01-15T23:50:11.051511855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:50:11.052450 containerd[1995]: time="2026-01-15T23:50:11.051663379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:50:11.055282 kubelet[3591]: E0115 23:50:11.052190 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:50:11.055282 kubelet[3591]: E0115 23:50:11.052387 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:50:11.056083 kubelet[3591]: E0115 23:50:11.054349 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgmf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fdbbb9d69-q7mqz_calico-system(250297aa-f2ed-4da8-b086-a79052c5e783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:11.060582 kubelet[3591]: E0115 23:50:11.059783 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:50:11.249041 kubelet[3591]: E0115 23:50:11.248638 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:50:11.364533 systemd-networkd[1841]: vxlan.calico: Link UP Jan 15 23:50:11.364556 systemd-networkd[1841]: vxlan.calico: Gained carrier Jan 15 23:50:11.412062 (udev-worker)[4613]: Network interface NamePolicy= disabled on kernel command line. Jan 15 23:50:11.537998 systemd-networkd[1841]: calif58d6485646: Link UP Jan 15 23:50:11.539842 systemd-networkd[1841]: calif58d6485646: Gained carrier Jan 15 23:50:11.559978 sshd[5124]: Accepted publickey for core from 20.161.92.111 port 34920 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:11.566185 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:11.582921 systemd-logind[1976]: New session 8 of user core. Jan 15 23:50:11.590761 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.047 [INFO][5109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0 calico-apiserver-6c69b78f6b- calico-apiserver 3a3a871b-481b-4197-950a-9e2f48b0e53a 861 0 2026-01-15 23:49:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c69b78f6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-91 calico-apiserver-6c69b78f6b-zzmq4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif58d6485646 [] [] }} ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.048 [INFO][5109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.196 [INFO][5131] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" HandleID="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Workload="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.197 [INFO][5131] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" 
HandleID="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Workload="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338c80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-91", "pod":"calico-apiserver-6c69b78f6b-zzmq4", "timestamp":"2026-01-15 23:50:11.196668692 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.197 [INFO][5131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.197 [INFO][5131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.198 [INFO][5131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.293 [INFO][5131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.369 [INFO][5131] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.406 [INFO][5131] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.438 [INFO][5131] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.455 [INFO][5131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 
host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.456 [INFO][5131] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.463 [INFO][5131] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.484 [INFO][5131] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.521 [INFO][5131] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.68/26] block=192.168.85.64/26 handle="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.522 [INFO][5131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.68/26] handle="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" host="ip-172-31-28-91" Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.523 [INFO][5131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:50:11.641595 containerd[1995]: 2026-01-15 23:50:11.523 [INFO][5131] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.68/26] IPv6=[] ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" HandleID="k8s-pod-network.de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Workload="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.643806 containerd[1995]: 2026-01-15 23:50:11.529 [INFO][5109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0", GenerateName:"calico-apiserver-6c69b78f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a3a871b-481b-4197-950a-9e2f48b0e53a", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c69b78f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"calico-apiserver-6c69b78f6b-zzmq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif58d6485646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:11.643806 containerd[1995]: 2026-01-15 23:50:11.530 [INFO][5109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.68/32] ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.643806 containerd[1995]: 2026-01-15 23:50:11.530 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif58d6485646 ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.643806 containerd[1995]: 2026-01-15 23:50:11.544 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.643806 containerd[1995]: 2026-01-15 23:50:11.548 [INFO][5109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0", GenerateName:"calico-apiserver-6c69b78f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a3a871b-481b-4197-950a-9e2f48b0e53a", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c69b78f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d", Pod:"calico-apiserver-6c69b78f6b-zzmq4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif58d6485646", MAC:"6e:e5:0a:c1:81:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:11.643806 containerd[1995]: 2026-01-15 23:50:11.633 [INFO][5109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-zzmq4" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--zzmq4-eth0" Jan 15 23:50:11.722002 containerd[1995]: time="2026-01-15T23:50:11.720586666Z" level=info msg="connecting to shim 
de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d" address="unix:///run/containerd/s/82a6ce7b1daf5d3edc96be1a0ac81e5b1f5b39c18e32d69d5766f13c7b846e91" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:11.817943 systemd[1]: Started cri-containerd-de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d.scope - libcontainer container de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d. Jan 15 23:50:11.857785 containerd[1995]: time="2026-01-15T23:50:11.857725475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcqlh,Uid:7292bea6-012f-4e29-ba2d-73a4ea488a56,Namespace:calico-system,Attempt:0,}" Jan 15 23:50:11.865820 systemd-networkd[1841]: calif6f62758ec1: Gained IPv6LL Jan 15 23:50:12.129373 containerd[1995]: time="2026-01-15T23:50:12.129307880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-zzmq4,Uid:3a3a871b-481b-4197-950a-9e2f48b0e53a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"de5026d93d6eae67756089552331c7fd0439ea594f312df7541408352fa2ee9d\"" Jan 15 23:50:12.137514 containerd[1995]: time="2026-01-15T23:50:12.136179860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:50:12.262875 kubelet[3591]: E0115 23:50:12.262810 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:50:12.329079 sshd[5162]: Connection closed by 20.161.92.111 port 34920 
Jan 15 23:50:12.330798 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:12.345726 systemd[1]: sshd@7-172.31.28.91:22-20.161.92.111:34920.service: Deactivated successfully. Jan 15 23:50:12.356345 systemd[1]: session-8.scope: Deactivated successfully. Jan 15 23:50:12.359799 systemd-logind[1976]: Session 8 logged out. Waiting for processes to exit. Jan 15 23:50:12.365317 systemd-logind[1976]: Removed session 8. Jan 15 23:50:12.388612 systemd-networkd[1841]: cali6a9506332aa: Link UP Jan 15 23:50:12.390668 systemd-networkd[1841]: cali6a9506332aa: Gained carrier Jan 15 23:50:12.405945 containerd[1995]: time="2026-01-15T23:50:12.404952982Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:12.408515 containerd[1995]: time="2026-01-15T23:50:12.408051850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:12.411983 containerd[1995]: time="2026-01-15T23:50:12.408021790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:50:12.414226 kubelet[3591]: E0115 23:50:12.413858 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:12.414226 kubelet[3591]: E0115 23:50:12.413925 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:12.414226 kubelet[3591]: E0115 23:50:12.414113 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w54nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-zzmq4_calico-apiserver(3a3a871b-481b-4197-950a-9e2f48b0e53a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:12.416399 kubelet[3591]: E0115 23:50:12.416054 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.064 [INFO][5214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0 goldmane-666569f655- calico-system 7292bea6-012f-4e29-ba2d-73a4ea488a56 863 0 2026-01-15 23:49:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-91 goldmane-666569f655-zcqlh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6a9506332aa [] [] }} ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.065 [INFO][5214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.202 [INFO][5229] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" HandleID="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Workload="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.203 [INFO][5229] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" HandleID="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Workload="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-91", 
"pod":"goldmane-666569f655-zcqlh", "timestamp":"2026-01-15 23:50:12.201964413 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.203 [INFO][5229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.203 [INFO][5229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.203 [INFO][5229] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.243 [INFO][5229] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.274 [INFO][5229] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.299 [INFO][5229] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.306 [INFO][5229] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.313 [INFO][5229] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.314 [INFO][5229] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 
23:50:12.320 [INFO][5229] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862 Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.343 [INFO][5229] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.374 [INFO][5229] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.69/26] block=192.168.85.64/26 handle="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.374 [INFO][5229] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.69/26] handle="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" host="ip-172-31-28-91" Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.374 [INFO][5229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 15 23:50:12.426747 containerd[1995]: 2026-01-15 23:50:12.374 [INFO][5229] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.69/26] IPv6=[] ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" HandleID="k8s-pod-network.126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Workload="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.430516 containerd[1995]: 2026-01-15 23:50:12.380 [INFO][5214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7292bea6-012f-4e29-ba2d-73a4ea488a56", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"goldmane-666569f655-zcqlh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali6a9506332aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:12.430516 containerd[1995]: 2026-01-15 23:50:12.380 [INFO][5214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.69/32] ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.430516 containerd[1995]: 2026-01-15 23:50:12.381 [INFO][5214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a9506332aa ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.430516 containerd[1995]: 2026-01-15 23:50:12.391 [INFO][5214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.430516 containerd[1995]: 2026-01-15 23:50:12.391 [INFO][5214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7292bea6-012f-4e29-ba2d-73a4ea488a56", ResourceVersion:"863", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862", Pod:"goldmane-666569f655-zcqlh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6a9506332aa", MAC:"8a:79:b1:89:ec:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:12.430516 containerd[1995]: 2026-01-15 23:50:12.408 [INFO][5214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" Namespace="calico-system" Pod="goldmane-666569f655-zcqlh" WorkloadEndpoint="ip--172--31--28--91-k8s-goldmane--666569f655--zcqlh-eth0" Jan 15 23:50:12.521825 containerd[1995]: time="2026-01-15T23:50:12.521700346Z" level=info msg="connecting to shim 126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862" address="unix:///run/containerd/s/daa4100a58611b6fa25e9335b3fe35fa04edbef84f6dc3b4be90e272636bb83c" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:12.598073 systemd[1]: Started cri-containerd-126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862.scope - libcontainer container 
126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862. Jan 15 23:50:12.632903 systemd-networkd[1841]: vxlan.calico: Gained IPv6LL Jan 15 23:50:12.798937 containerd[1995]: time="2026-01-15T23:50:12.798733128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zcqlh,Uid:7292bea6-012f-4e29-ba2d-73a4ea488a56,Namespace:calico-system,Attempt:0,} returns sandbox id \"126f903e80c48298908e2a3c7a9876d554e64d72d3f551d5ac40b855f3116862\"" Jan 15 23:50:12.810449 containerd[1995]: time="2026-01-15T23:50:12.810386700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:50:12.859853 containerd[1995]: time="2026-01-15T23:50:12.859794384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-96q62,Uid:5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0,Namespace:calico-apiserver,Attempt:0,}" Jan 15 23:50:13.080206 containerd[1995]: time="2026-01-15T23:50:13.079775181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:13.083211 containerd[1995]: time="2026-01-15T23:50:13.083094789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:50:13.084575 containerd[1995]: time="2026-01-15T23:50:13.083158533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:13.084876 kubelet[3591]: E0115 23:50:13.084822 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:50:13.085302 kubelet[3591]: E0115 23:50:13.084992 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:50:13.085302 kubelet[3591]: E0115 23:50:13.085210 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRe
adOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcqlh_calico-system(7292bea6-012f-4e29-ba2d-73a4ea488a56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:13.087269 kubelet[3591]: E0115 23:50:13.086987 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:50:13.122374 systemd-networkd[1841]: cali1f27fdd02f7: Link UP Jan 15 23:50:13.123782 systemd-networkd[1841]: cali1f27fdd02f7: Gained carrier Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:12.971 [INFO][5342] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0 calico-apiserver-6c69b78f6b- calico-apiserver 5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0 864 0 2026-01-15 23:49:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c69b78f6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-91 calico-apiserver-6c69b78f6b-96q62 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f27fdd02f7 [] [] }} ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:12.972 [INFO][5342] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.027 [INFO][5355] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" HandleID="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" 
Workload="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.028 [INFO][5355] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" HandleID="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Workload="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb7c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-91", "pod":"calico-apiserver-6c69b78f6b-96q62", "timestamp":"2026-01-15 23:50:13.027892041 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.028 [INFO][5355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.028 [INFO][5355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.028 [INFO][5355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.049 [INFO][5355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.067 [INFO][5355] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.076 [INFO][5355] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.080 [INFO][5355] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.087 [INFO][5355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.087 [INFO][5355] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.090 [INFO][5355] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95 Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.098 [INFO][5355] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.112 [INFO][5355] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.70/26] block=192.168.85.64/26 
handle="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.112 [INFO][5355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.70/26] handle="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" host="ip-172-31-28-91" Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.112 [INFO][5355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 23:50:13.155678 containerd[1995]: 2026-01-15 23:50:13.112 [INFO][5355] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.70/26] IPv6=[] ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" HandleID="k8s-pod-network.22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Workload="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.158700 containerd[1995]: 2026-01-15 23:50:13.116 [INFO][5342] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0", GenerateName:"calico-apiserver-6c69b78f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c69b78f6b", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"calico-apiserver-6c69b78f6b-96q62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f27fdd02f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:13.158700 containerd[1995]: 2026-01-15 23:50:13.116 [INFO][5342] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.70/32] ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.158700 containerd[1995]: 2026-01-15 23:50:13.116 [INFO][5342] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f27fdd02f7 ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.158700 containerd[1995]: 2026-01-15 23:50:13.124 [INFO][5342] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.158700 
containerd[1995]: 2026-01-15 23:50:13.125 [INFO][5342] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0", GenerateName:"calico-apiserver-6c69b78f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c69b78f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95", Pod:"calico-apiserver-6c69b78f6b-96q62", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f27fdd02f7", MAC:"5e:02:29:67:2d:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:13.158700 
containerd[1995]: 2026-01-15 23:50:13.148 [INFO][5342] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" Namespace="calico-apiserver" Pod="calico-apiserver-6c69b78f6b-96q62" WorkloadEndpoint="ip--172--31--28--91-k8s-calico--apiserver--6c69b78f6b--96q62-eth0" Jan 15 23:50:13.207578 containerd[1995]: time="2026-01-15T23:50:13.207456382Z" level=info msg="connecting to shim 22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95" address="unix:///run/containerd/s/3fd7f999f319d5997f9ed274c733bcc03e94cacb6184467b9b41a55330cb31d7" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:13.269034 kubelet[3591]: E0115 23:50:13.268867 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:50:13.271715 kubelet[3591]: E0115 23:50:13.270710 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:50:13.269767 systemd[1]: Started 
cri-containerd-22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95.scope - libcontainer container 22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95. Jan 15 23:50:13.443131 containerd[1995]: time="2026-01-15T23:50:13.443065163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c69b78f6b-96q62,Uid:5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"22cedef13cf333bf5a3b1f7969379d5db38f55786460bb96350a0739bf245d95\"" Jan 15 23:50:13.448344 containerd[1995]: time="2026-01-15T23:50:13.448282199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:50:13.528867 systemd-networkd[1841]: calif58d6485646: Gained IPv6LL Jan 15 23:50:13.657033 systemd-networkd[1841]: cali6a9506332aa: Gained IPv6LL Jan 15 23:50:13.698598 containerd[1995]: time="2026-01-15T23:50:13.698047452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:13.700577 containerd[1995]: time="2026-01-15T23:50:13.700377852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:50:13.700577 containerd[1995]: time="2026-01-15T23:50:13.700418328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:13.700828 kubelet[3591]: E0115 23:50:13.700747 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:13.700905 
kubelet[3591]: E0115 23:50:13.700820 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:13.701156 kubelet[3591]: E0115 23:50:13.700994 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgdjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-96q62_calico-apiserver(5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:13.702244 kubelet[3591]: E0115 23:50:13.702180 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:50:13.858106 containerd[1995]: time="2026-01-15T23:50:13.858033997Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9kccb,Uid:8c717cde-58a7-4b04-87d8-59853ebab9ea,Namespace:kube-system,Attempt:0,}" Jan 15 23:50:14.204373 systemd-networkd[1841]: cali1df57b9bb73: Link UP Jan 15 23:50:14.206700 systemd-networkd[1841]: cali1df57b9bb73: Gained carrier Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.008 [INFO][5421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0 coredns-668d6bf9bc- kube-system 8c717cde-58a7-4b04-87d8-59853ebab9ea 866 0 2026-01-15 23:49:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-91 coredns-668d6bf9bc-9kccb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1df57b9bb73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.008 [INFO][5421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.076 [INFO][5432] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" HandleID="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Workload="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.077 
[INFO][5432] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" HandleID="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Workload="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb200), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-91", "pod":"coredns-668d6bf9bc-9kccb", "timestamp":"2026-01-15 23:50:14.076978702 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.077 [INFO][5432] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.077 [INFO][5432] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.077 [INFO][5432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.099 [INFO][5432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.111 [INFO][5432] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.136 [INFO][5432] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.143 [INFO][5432] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.151 [INFO][5432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.151 [INFO][5432] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.155 [INFO][5432] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64 Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.170 [INFO][5432] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.190 [INFO][5432] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.71/26] block=192.168.85.64/26 
handle="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.190 [INFO][5432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.71/26] handle="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" host="ip-172-31-28-91" Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.190 [INFO][5432] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 23:50:14.243540 containerd[1995]: 2026-01-15 23:50:14.191 [INFO][5432] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.71/26] IPv6=[] ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" HandleID="k8s-pod-network.0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Workload="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.246143 containerd[1995]: 2026-01-15 23:50:14.195 [INFO][5421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c717cde-58a7-4b04-87d8-59853ebab9ea", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"coredns-668d6bf9bc-9kccb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1df57b9bb73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:14.246143 containerd[1995]: 2026-01-15 23:50:14.195 [INFO][5421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.71/32] ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.246143 containerd[1995]: 2026-01-15 23:50:14.195 [INFO][5421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1df57b9bb73 ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.246143 containerd[1995]: 2026-01-15 23:50:14.208 [INFO][5421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.246143 containerd[1995]: 2026-01-15 23:50:14.209 [INFO][5421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8c717cde-58a7-4b04-87d8-59853ebab9ea", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64", Pod:"coredns-668d6bf9bc-9kccb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1df57b9bb73", MAC:"ee:3a:af:83:7e:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:14.246143 containerd[1995]: 2026-01-15 23:50:14.234 [INFO][5421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" Namespace="kube-system" Pod="coredns-668d6bf9bc-9kccb" WorkloadEndpoint="ip--172--31--28--91-k8s-coredns--668d6bf9bc--9kccb-eth0" Jan 15 23:50:14.281709 kubelet[3591]: E0115 23:50:14.280697 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:50:14.281709 kubelet[3591]: E0115 23:50:14.281583 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:50:14.323964 
containerd[1995]: time="2026-01-15T23:50:14.323766911Z" level=info msg="connecting to shim 0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64" address="unix:///run/containerd/s/6b9e5eab800467a73a9194e5eda4552c84e98a5d8338c1236a43b554d70eeae0" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:14.421050 systemd[1]: Started cri-containerd-0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64.scope - libcontainer container 0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64. Jan 15 23:50:14.563539 containerd[1995]: time="2026-01-15T23:50:14.563382529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9kccb,Uid:8c717cde-58a7-4b04-87d8-59853ebab9ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64\"" Jan 15 23:50:14.573550 containerd[1995]: time="2026-01-15T23:50:14.572927869Z" level=info msg="CreateContainer within sandbox \"0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 15 23:50:14.601766 containerd[1995]: time="2026-01-15T23:50:14.601682449Z" level=info msg="Container 1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:50:14.626296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321885803.mount: Deactivated successfully. 
Jan 15 23:50:14.633526 containerd[1995]: time="2026-01-15T23:50:14.633360457Z" level=info msg="CreateContainer within sandbox \"0a16e76c92ada6634bc40d2c86e138f571954943162e79de61951a41d4c8cd64\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2\"" Jan 15 23:50:14.635756 containerd[1995]: time="2026-01-15T23:50:14.635707201Z" level=info msg="StartContainer for \"1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2\"" Jan 15 23:50:14.640312 containerd[1995]: time="2026-01-15T23:50:14.640125901Z" level=info msg="connecting to shim 1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2" address="unix:///run/containerd/s/6b9e5eab800467a73a9194e5eda4552c84e98a5d8338c1236a43b554d70eeae0" protocol=ttrpc version=3 Jan 15 23:50:14.718882 systemd[1]: Started cri-containerd-1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2.scope - libcontainer container 1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2. 
Jan 15 23:50:14.812119 containerd[1995]: time="2026-01-15T23:50:14.812073002Z" level=info msg="StartContainer for \"1346a6620cf0b326585059564961d9b59c9ba4c713d4a0c05abe0de5d8f514d2\" returns successfully" Jan 15 23:50:14.858612 containerd[1995]: time="2026-01-15T23:50:14.858262610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hscnf,Uid:9fb7073f-5e73-4607-9430-af7f999d9c94,Namespace:calico-system,Attempt:0,}" Jan 15 23:50:15.135339 systemd-networkd[1841]: cali2a14b4560e7: Link UP Jan 15 23:50:15.144055 systemd-networkd[1841]: cali2a14b4560e7: Gained carrier Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:14.973 [INFO][5531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0 csi-node-driver- calico-system 9fb7073f-5e73-4607-9430-af7f999d9c94 757 0 2026-01-15 23:49:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-91 csi-node-driver-hscnf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2a14b4560e7 [] [] }} ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:14.974 [INFO][5531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.035 [INFO][5544] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" HandleID="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Workload="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.035 [INFO][5544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" HandleID="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Workload="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-91", "pod":"csi-node-driver-hscnf", "timestamp":"2026-01-15 23:50:15.035123027 +0000 UTC"}, Hostname:"ip-172-31-28-91", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.035 [INFO][5544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.035 [INFO][5544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.035 [INFO][5544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-91' Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.053 [INFO][5544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.064 [INFO][5544] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.074 [INFO][5544] ipam/ipam.go 511: Trying affinity for 192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.077 [INFO][5544] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.082 [INFO][5544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.64/26 host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.082 [INFO][5544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.64/26 handle="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.087 [INFO][5544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.095 [INFO][5544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.64/26 handle="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.110 [INFO][5544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.72/26] block=192.168.85.64/26 
handle="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.110 [INFO][5544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.72/26] handle="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" host="ip-172-31-28-91" Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.110 [INFO][5544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 15 23:50:15.181146 containerd[1995]: 2026-01-15 23:50:15.110 [INFO][5544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.72/26] IPv6=[] ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" HandleID="k8s-pod-network.7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Workload="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.182374 containerd[1995]: 2026-01-15 23:50:15.116 [INFO][5531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fb7073f-5e73-4607-9430-af7f999d9c94", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"", Pod:"csi-node-driver-hscnf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a14b4560e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:15.182374 containerd[1995]: 2026-01-15 23:50:15.117 [INFO][5531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.72/32] ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.182374 containerd[1995]: 2026-01-15 23:50:15.117 [INFO][5531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a14b4560e7 ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.182374 containerd[1995]: 2026-01-15 23:50:15.149 [INFO][5531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.182374 containerd[1995]: 2026-01-15 23:50:15.151 [INFO][5531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9fb7073f-5e73-4607-9430-af7f999d9c94", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 15, 23, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-91", ContainerID:"7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac", Pod:"csi-node-driver-hscnf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a14b4560e7", MAC:"1a:b8:b3:02:41:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 15 23:50:15.182374 containerd[1995]: 2026-01-15 23:50:15.172 [INFO][5531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" Namespace="calico-system" Pod="csi-node-driver-hscnf" WorkloadEndpoint="ip--172--31--28--91-k8s-csi--node--driver--hscnf-eth0" Jan 15 23:50:15.194594 systemd-networkd[1841]: cali1f27fdd02f7: Gained IPv6LL Jan 15 23:50:15.239157 containerd[1995]: time="2026-01-15T23:50:15.238991988Z" level=info msg="connecting to shim 7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac" address="unix:///run/containerd/s/5a006e1974c6a54cd9c1b015a3d87c6956323a8c8e56114562e68f1537156411" namespace=k8s.io protocol=ttrpc version=3 Jan 15 23:50:15.299985 kubelet[3591]: E0115 23:50:15.299379 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:50:15.337099 systemd[1]: Started cri-containerd-7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac.scope - libcontainer container 7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac. 
Jan 15 23:50:15.384724 systemd-networkd[1841]: cali1df57b9bb73: Gained IPv6LL Jan 15 23:50:15.425038 kubelet[3591]: I0115 23:50:15.424899 3591 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9kccb" podStartSLOduration=59.424871329 podStartE2EDuration="59.424871329s" podCreationTimestamp="2026-01-15 23:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-15 23:50:15.371324173 +0000 UTC m=+62.757881101" watchObservedRunningTime="2026-01-15 23:50:15.424871329 +0000 UTC m=+62.811428521" Jan 15 23:50:15.477491 containerd[1995]: time="2026-01-15T23:50:15.477413533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hscnf,Uid:9fb7073f-5e73-4607-9430-af7f999d9c94,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b2f044fcec430835612ef5c093b4feb9005672997e918db22ec081e9ead9fac\"" Jan 15 23:50:15.483256 containerd[1995]: time="2026-01-15T23:50:15.483180301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:50:15.754827 containerd[1995]: time="2026-01-15T23:50:15.754662842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:15.756917 containerd[1995]: time="2026-01-15T23:50:15.756812234Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:50:15.757163 containerd[1995]: time="2026-01-15T23:50:15.756871478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:50:15.757231 kubelet[3591]: E0115 23:50:15.757118 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:50:15.757231 kubelet[3591]: E0115 23:50:15.757176 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:50:15.757416 kubelet[3591]: E0115 23:50:15.757345 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Secu
rityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:15.761379 containerd[1995]: time="2026-01-15T23:50:15.761333894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:50:16.095907 containerd[1995]: time="2026-01-15T23:50:16.095632224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:16.097892 containerd[1995]: time="2026-01-15T23:50:16.097807500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:50:16.098027 containerd[1995]: time="2026-01-15T23:50:16.097949364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:50:16.098377 kubelet[3591]: E0115 23:50:16.098310 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:50:16.098482 kubelet[3591]: E0115 23:50:16.098385 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:50:16.098670 kubelet[3591]: E0115 23:50:16.098596 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:16.100281 kubelet[3591]: E0115 23:50:16.100176 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:50:16.301049 kubelet[3591]: E0115 23:50:16.300917 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:50:16.474133 systemd-networkd[1841]: cali2a14b4560e7: Gained IPv6LL Jan 15 23:50:17.303665 kubelet[3591]: E0115 23:50:17.303564 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:50:17.426712 systemd[1]: Started sshd@8-172.31.28.91:22-20.161.92.111:52718.service - OpenSSH per-connection server daemon (20.161.92.111:52718). Jan 15 23:50:17.963530 sshd[5610]: Accepted publickey for core from 20.161.92.111 port 52718 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:17.967378 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:17.977903 systemd-logind[1976]: New session 9 of user core. Jan 15 23:50:17.984743 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 15 23:50:18.482543 sshd[5621]: Connection closed by 20.161.92.111 port 52718 Jan 15 23:50:18.484682 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:18.492686 systemd[1]: sshd@8-172.31.28.91:22-20.161.92.111:52718.service: Deactivated successfully. Jan 15 23:50:18.497508 systemd[1]: session-9.scope: Deactivated successfully. Jan 15 23:50:18.500064 systemd-logind[1976]: Session 9 logged out. Waiting for processes to exit. Jan 15 23:50:18.503729 systemd-logind[1976]: Removed session 9. Jan 15 23:50:18.893603 ntpd[2185]: Listen normally on 6 vxlan.calico 192.168.85.64:123 Jan 15 23:50:18.894710 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 6 vxlan.calico 192.168.85.64:123 Jan 15 23:50:18.894710 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 7 cali90cda7613f5 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 15 23:50:18.894710 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 8 cali607ef8ad2ba [fe80::ecee:eeff:feee:eeee%5]:123 Jan 15 23:50:18.894710 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 9 calif6f62758ec1 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 15 23:50:18.894710 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 10 vxlan.calico [fe80::643b:d8ff:fe7e:c8b6%7]:123 Jan 15 23:50:18.894710 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 11 calif58d6485646 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 15 23:50:18.894246 ntpd[2185]: Listen normally on 7 cali90cda7613f5 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 15 23:50:18.894293 ntpd[2185]: Listen normally on 8 cali607ef8ad2ba [fe80::ecee:eeff:feee:eeee%5]:123 Jan 15 23:50:18.894336 ntpd[2185]: Listen normally on 9 calif6f62758ec1 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 15 23:50:18.894381 ntpd[2185]: Listen normally on 10 vxlan.calico [fe80::643b:d8ff:fe7e:c8b6%7]:123 Jan 15 23:50:18.894429 ntpd[2185]: Listen normally on 11 calif58d6485646 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 15 23:50:18.895397 ntpd[2185]: Listen normally on 12 
cali6a9506332aa [fe80::ecee:eeff:feee:eeee%11]:123 Jan 15 23:50:18.896138 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 12 cali6a9506332aa [fe80::ecee:eeff:feee:eeee%11]:123 Jan 15 23:50:18.896138 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 13 cali1f27fdd02f7 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 15 23:50:18.896138 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 14 cali1df57b9bb73 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 15 23:50:18.896138 ntpd[2185]: 15 Jan 23:50:18 ntpd[2185]: Listen normally on 15 cali2a14b4560e7 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 15 23:50:18.895515 ntpd[2185]: Listen normally on 13 cali1f27fdd02f7 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 15 23:50:18.895570 ntpd[2185]: Listen normally on 14 cali1df57b9bb73 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 15 23:50:18.895614 ntpd[2185]: Listen normally on 15 cali2a14b4560e7 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 15 23:50:22.857784 containerd[1995]: time="2026-01-15T23:50:22.857713366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:50:23.109768 containerd[1995]: time="2026-01-15T23:50:23.109611175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:23.112708 containerd[1995]: time="2026-01-15T23:50:23.112587511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:50:23.112708 containerd[1995]: time="2026-01-15T23:50:23.112670287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:50:23.113005 kubelet[3591]: E0115 23:50:23.112928 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:50:23.115821 kubelet[3591]: E0115 23:50:23.113018 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:50:23.115821 kubelet[3591]: E0115 23:50:23.113298 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bb2667bde4a84941ae0fd665fe854e1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fa
lse,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:23.118097 containerd[1995]: time="2026-01-15T23:50:23.117958915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 23:50:23.380396 containerd[1995]: time="2026-01-15T23:50:23.380217500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:23.383014 containerd[1995]: time="2026-01-15T23:50:23.382929800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:50:23.383520 containerd[1995]: time="2026-01-15T23:50:23.382979612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:50:23.383629 kubelet[3591]: E0115 23:50:23.383332 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:50:23.383629 kubelet[3591]: E0115 23:50:23.383393 3591 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:50:23.384269 kubelet[3591]: E0115 23:50:23.384160 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,Sec
compProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:23.385631 kubelet[3591]: E0115 23:50:23.385544 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:50:23.578383 systemd[1]: Started sshd@9-172.31.28.91:22-20.161.92.111:40922.service - OpenSSH per-connection server daemon (20.161.92.111:40922). 
Jan 15 23:50:24.097403 sshd[5645]: Accepted publickey for core from 20.161.92.111 port 40922 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:24.100324 sshd-session[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:24.108171 systemd-logind[1976]: New session 10 of user core. Jan 15 23:50:24.115705 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 15 23:50:24.584224 sshd[5648]: Connection closed by 20.161.92.111 port 40922 Jan 15 23:50:24.585134 sshd-session[5645]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:24.594000 systemd[1]: sshd@9-172.31.28.91:22-20.161.92.111:40922.service: Deactivated successfully. Jan 15 23:50:24.600239 systemd[1]: session-10.scope: Deactivated successfully. Jan 15 23:50:24.602359 systemd-logind[1976]: Session 10 logged out. Waiting for processes to exit. Jan 15 23:50:24.605216 systemd-logind[1976]: Removed session 10. Jan 15 23:50:24.677844 systemd[1]: Started sshd@10-172.31.28.91:22-20.161.92.111:40932.service - OpenSSH per-connection server daemon (20.161.92.111:40932). Jan 15 23:50:25.198675 sshd[5661]: Accepted publickey for core from 20.161.92.111 port 40932 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:25.200960 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:25.208988 systemd-logind[1976]: New session 11 of user core. Jan 15 23:50:25.217755 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 15 23:50:25.777005 sshd[5664]: Connection closed by 20.161.92.111 port 40932 Jan 15 23:50:25.777742 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:25.785340 systemd[1]: sshd@10-172.31.28.91:22-20.161.92.111:40932.service: Deactivated successfully. Jan 15 23:50:25.789345 systemd[1]: session-11.scope: Deactivated successfully. Jan 15 23:50:25.793947 systemd-logind[1976]: Session 11 logged out. 
Waiting for processes to exit. Jan 15 23:50:25.797176 systemd-logind[1976]: Removed session 11. Jan 15 23:50:25.858786 containerd[1995]: time="2026-01-15T23:50:25.858069073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:50:25.872548 systemd[1]: Started sshd@11-172.31.28.91:22-20.161.92.111:40934.service - OpenSSH per-connection server daemon (20.161.92.111:40934). Jan 15 23:50:26.123737 containerd[1995]: time="2026-01-15T23:50:26.123037918Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:26.128648 containerd[1995]: time="2026-01-15T23:50:26.128519338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:50:26.128648 containerd[1995]: time="2026-01-15T23:50:26.128593630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:50:26.128931 kubelet[3591]: E0115 23:50:26.128824 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:50:26.128931 kubelet[3591]: E0115 23:50:26.128888 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:50:26.129951 kubelet[3591]: E0115 23:50:26.129579 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgmf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fdbbb9d69-q7mqz_calico-system(250297aa-f2ed-4da8-b086-a79052c5e783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:26.130925 kubelet[3591]: E0115 23:50:26.130847 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:50:26.404576 sshd[5677]: Accepted publickey for core from 20.161.92.111 port 40934 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:26.406757 
sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:26.416222 systemd-logind[1976]: New session 12 of user core. Jan 15 23:50:26.430716 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 15 23:50:26.858965 containerd[1995]: time="2026-01-15T23:50:26.858737186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:50:26.882596 sshd[5680]: Connection closed by 20.161.92.111 port 40934 Jan 15 23:50:26.884239 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:26.894559 systemd-logind[1976]: Session 12 logged out. Waiting for processes to exit. Jan 15 23:50:26.896358 systemd[1]: sshd@11-172.31.28.91:22-20.161.92.111:40934.service: Deactivated successfully. Jan 15 23:50:26.903028 systemd[1]: session-12.scope: Deactivated successfully. Jan 15 23:50:26.906607 systemd-logind[1976]: Removed session 12. Jan 15 23:50:27.182379 containerd[1995]: time="2026-01-15T23:50:27.182172683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:27.184270 containerd[1995]: time="2026-01-15T23:50:27.184143599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:50:27.184270 containerd[1995]: time="2026-01-15T23:50:27.184225559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:27.184697 kubelet[3591]: E0115 23:50:27.184622 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:27.185549 kubelet[3591]: E0115 23:50:27.185217 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:27.185549 kubelet[3591]: E0115 23:50:27.185419 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w54nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-zzmq4_calico-apiserver(3a3a871b-481b-4197-950a-9e2f48b0e53a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:27.186779 kubelet[3591]: E0115 23:50:27.186708 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:50:28.859457 containerd[1995]: time="2026-01-15T23:50:28.859384576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:50:29.114407 containerd[1995]: 
time="2026-01-15T23:50:29.114249313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:29.116547 containerd[1995]: time="2026-01-15T23:50:29.116423137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:50:29.116547 containerd[1995]: time="2026-01-15T23:50:29.116496733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:50:29.116843 kubelet[3591]: E0115 23:50:29.116737 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:50:29.116843 kubelet[3591]: E0115 23:50:29.116796 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:50:29.117511 kubelet[3591]: E0115 23:50:29.117111 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:29.118767 containerd[1995]: time="2026-01-15T23:50:29.118704325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:50:29.368756 containerd[1995]: time="2026-01-15T23:50:29.368598830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:29.371117 containerd[1995]: time="2026-01-15T23:50:29.371038670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:50:29.371689 containerd[1995]: time="2026-01-15T23:50:29.371200190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:29.371799 kubelet[3591]: E0115 23:50:29.371327 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:50:29.371799 kubelet[3591]: E0115 23:50:29.371385 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:50:29.371799 kubelet[3591]: E0115 23:50:29.371643 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcqlh_calico-system(7292bea6-012f-4e29-ba2d-73a4ea488a56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:29.373005 containerd[1995]: time="2026-01-15T23:50:29.372623534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:50:29.373147 kubelet[3591]: E0115 23:50:29.373040 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:50:29.648690 containerd[1995]: time="2026-01-15T23:50:29.648527667Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 15 23:50:29.650815 containerd[1995]: time="2026-01-15T23:50:29.650707359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:50:29.650815 containerd[1995]: time="2026-01-15T23:50:29.650777163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:50:29.651194 kubelet[3591]: E0115 23:50:29.651026 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:50:29.651194 kubelet[3591]: E0115 23:50:29.651087 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:50:29.651597 kubelet[3591]: E0115 23:50:29.651244 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:29.652924 kubelet[3591]: E0115 23:50:29.652850 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:50:29.857638 containerd[1995]: time="2026-01-15T23:50:29.857498056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:50:30.099053 containerd[1995]: time="2026-01-15T23:50:30.098986598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:30.101985 containerd[1995]: time="2026-01-15T23:50:30.101835194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:50:30.101985 containerd[1995]: time="2026-01-15T23:50:30.101905562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:30.102218 kubelet[3591]: E0115 23:50:30.102136 3591 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:30.102333 kubelet[3591]: E0115 23:50:30.102212 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:30.102505 kubelet[3591]: E0115 23:50:30.102378 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgdjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-96q62_calico-apiserver(5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:30.105432 kubelet[3591]: E0115 23:50:30.105361 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:50:31.976046 systemd[1]: Started sshd@12-172.31.28.91:22-20.161.92.111:40944.service - OpenSSH per-connection server daemon (20.161.92.111:40944). Jan 15 23:50:32.493172 sshd[5697]: Accepted publickey for core from 20.161.92.111 port 40944 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:32.495650 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:32.503413 systemd-logind[1976]: New session 13 of user core. Jan 15 23:50:32.514755 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 15 23:50:32.994926 sshd[5700]: Connection closed by 20.161.92.111 port 40944 Jan 15 23:50:32.997756 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:33.005106 systemd-logind[1976]: Session 13 logged out. Waiting for processes to exit. Jan 15 23:50:33.006300 systemd[1]: sshd@12-172.31.28.91:22-20.161.92.111:40944.service: Deactivated successfully. Jan 15 23:50:33.011242 systemd[1]: session-13.scope: Deactivated successfully. Jan 15 23:50:33.017733 systemd-logind[1976]: Removed session 13. 
Jan 15 23:50:37.857561 kubelet[3591]: E0115 23:50:37.857419 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:50:37.860222 kubelet[3591]: E0115 23:50:37.860148 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:50:38.093944 systemd[1]: Started sshd@13-172.31.28.91:22-20.161.92.111:55874.service - OpenSSH per-connection server daemon (20.161.92.111:55874). 
Jan 15 23:50:38.613787 sshd[5720]: Accepted publickey for core from 20.161.92.111 port 55874 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:38.616190 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:38.623616 systemd-logind[1976]: New session 14 of user core. Jan 15 23:50:38.632714 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 15 23:50:39.161382 sshd[5723]: Connection closed by 20.161.92.111 port 55874 Jan 15 23:50:39.161887 sshd-session[5720]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:39.172283 systemd[1]: sshd@13-172.31.28.91:22-20.161.92.111:55874.service: Deactivated successfully. Jan 15 23:50:39.180654 systemd[1]: session-14.scope: Deactivated successfully. Jan 15 23:50:39.184545 systemd-logind[1976]: Session 14 logged out. Waiting for processes to exit. Jan 15 23:50:39.188687 systemd-logind[1976]: Removed session 14. Jan 15 23:50:40.859944 kubelet[3591]: E0115 23:50:40.858593 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:50:42.863924 kubelet[3591]: E0115 23:50:42.861656 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:50:42.863924 kubelet[3591]: E0115 23:50:42.863075 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:50:42.863924 kubelet[3591]: E0115 23:50:42.863207 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:50:44.266270 systemd[1]: Started 
sshd@14-172.31.28.91:22-20.161.92.111:40744.service - OpenSSH per-connection server daemon (20.161.92.111:40744). Jan 15 23:50:44.791484 sshd[5762]: Accepted publickey for core from 20.161.92.111 port 40744 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:44.793783 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:44.801962 systemd-logind[1976]: New session 15 of user core. Jan 15 23:50:44.814730 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 15 23:50:45.290232 sshd[5765]: Connection closed by 20.161.92.111 port 40744 Jan 15 23:50:45.291085 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:45.302180 systemd-logind[1976]: Session 15 logged out. Waiting for processes to exit. Jan 15 23:50:45.302458 systemd[1]: sshd@14-172.31.28.91:22-20.161.92.111:40744.service: Deactivated successfully. Jan 15 23:50:45.308228 systemd[1]: session-15.scope: Deactivated successfully. Jan 15 23:50:45.313056 systemd-logind[1976]: Removed session 15. Jan 15 23:50:45.385639 systemd[1]: Started sshd@15-172.31.28.91:22-20.161.92.111:40760.service - OpenSSH per-connection server daemon (20.161.92.111:40760). Jan 15 23:50:45.914818 sshd[5778]: Accepted publickey for core from 20.161.92.111 port 40760 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:45.917896 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:45.925917 systemd-logind[1976]: New session 16 of user core. Jan 15 23:50:45.936763 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 15 23:50:46.648502 sshd[5781]: Connection closed by 20.161.92.111 port 40760 Jan 15 23:50:46.651545 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:46.662618 systemd[1]: sshd@15-172.31.28.91:22-20.161.92.111:40760.service: Deactivated successfully. 
Jan 15 23:50:46.670838 systemd[1]: session-16.scope: Deactivated successfully. Jan 15 23:50:46.674340 systemd-logind[1976]: Session 16 logged out. Waiting for processes to exit. Jan 15 23:50:46.679556 systemd-logind[1976]: Removed session 16. Jan 15 23:50:46.743397 systemd[1]: Started sshd@16-172.31.28.91:22-20.161.92.111:40766.service - OpenSSH per-connection server daemon (20.161.92.111:40766). Jan 15 23:50:47.290100 sshd[5791]: Accepted publickey for core from 20.161.92.111 port 40766 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:47.293019 sshd-session[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:47.305875 systemd-logind[1976]: New session 17 of user core. Jan 15 23:50:47.316037 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 15 23:50:48.817990 sshd[5794]: Connection closed by 20.161.92.111 port 40766 Jan 15 23:50:48.818784 sshd-session[5791]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:48.831853 systemd-logind[1976]: Session 17 logged out. Waiting for processes to exit. Jan 15 23:50:48.832761 systemd[1]: sshd@16-172.31.28.91:22-20.161.92.111:40766.service: Deactivated successfully. Jan 15 23:50:48.844653 systemd[1]: session-17.scope: Deactivated successfully. Jan 15 23:50:48.849557 systemd-logind[1976]: Removed session 17. Jan 15 23:50:48.865264 containerd[1995]: time="2026-01-15T23:50:48.864841643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:50:48.918381 systemd[1]: Started sshd@17-172.31.28.91:22-20.161.92.111:40772.service - OpenSSH per-connection server daemon (20.161.92.111:40772). 
Jan 15 23:50:49.142392 containerd[1995]: time="2026-01-15T23:50:49.141648620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:49.145725 containerd[1995]: time="2026-01-15T23:50:49.143893688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:50:49.145725 containerd[1995]: time="2026-01-15T23:50:49.145503260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:50:49.146486 kubelet[3591]: E0115 23:50:49.146122 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:50:49.146486 kubelet[3591]: E0115 23:50:49.146191 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:50:49.147133 containerd[1995]: time="2026-01-15T23:50:49.146771072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:50:49.148942 kubelet[3591]: E0115 23:50:49.148180 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgmf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fdbbb9d69-q7mqz_calico-system(250297aa-f2ed-4da8-b086-a79052c5e783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:49.151585 kubelet[3591]: E0115 23:50:49.151447 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:50:49.476635 sshd[5813]: Accepted publickey for core from 20.161.92.111 port 40772 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:49.479916 
sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:49.488044 systemd-logind[1976]: New session 18 of user core. Jan 15 23:50:49.500753 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 15 23:50:49.585834 containerd[1995]: time="2026-01-15T23:50:49.585772282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:49.588159 containerd[1995]: time="2026-01-15T23:50:49.588059530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:50:49.589091 containerd[1995]: time="2026-01-15T23:50:49.588208078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:50:49.589233 kubelet[3591]: E0115 23:50:49.588578 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:50:49.589233 kubelet[3591]: E0115 23:50:49.588667 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:50:49.589233 kubelet[3591]: E0115 23:50:49.589148 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bb2667bde4a84941ae0fd665fe854e1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:49.592078 containerd[1995]: time="2026-01-15T23:50:49.591984022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 15 
23:50:49.847430 containerd[1995]: time="2026-01-15T23:50:49.847272168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:49.849699 containerd[1995]: time="2026-01-15T23:50:49.849610860Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:50:49.849846 containerd[1995]: time="2026-01-15T23:50:49.849758724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:50:49.850105 kubelet[3591]: E0115 23:50:49.850017 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:50:49.850194 kubelet[3591]: E0115 23:50:49.850135 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:50:49.851864 kubelet[3591]: E0115 23:50:49.851743 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:49.853764 kubelet[3591]: E0115 23:50:49.853672 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:50:50.261740 sshd[5816]: Connection closed by 20.161.92.111 port 40772 Jan 15 23:50:50.262403 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:50.269809 systemd-logind[1976]: Session 18 logged out. Waiting for processes to exit. Jan 15 23:50:50.270517 systemd[1]: sshd@17-172.31.28.91:22-20.161.92.111:40772.service: Deactivated successfully. Jan 15 23:50:50.274967 systemd[1]: session-18.scope: Deactivated successfully. Jan 15 23:50:50.280383 systemd-logind[1976]: Removed session 18. Jan 15 23:50:50.354067 systemd[1]: Started sshd@18-172.31.28.91:22-20.161.92.111:40774.service - OpenSSH per-connection server daemon (20.161.92.111:40774). 
Jan 15 23:50:50.891936 sshd[5826]: Accepted publickey for core from 20.161.92.111 port 40774 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:50.894365 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:50.902976 systemd-logind[1976]: New session 19 of user core. Jan 15 23:50:50.911729 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 15 23:50:51.380141 sshd[5829]: Connection closed by 20.161.92.111 port 40774 Jan 15 23:50:51.381010 sshd-session[5826]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:51.389499 systemd[1]: sshd@18-172.31.28.91:22-20.161.92.111:40774.service: Deactivated successfully. Jan 15 23:50:51.395059 systemd[1]: session-19.scope: Deactivated successfully. Jan 15 23:50:51.399748 systemd-logind[1976]: Session 19 logged out. Waiting for processes to exit. Jan 15 23:50:51.402396 systemd-logind[1976]: Removed session 19. Jan 15 23:50:54.859142 containerd[1995]: time="2026-01-15T23:50:54.858737393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:50:55.145231 containerd[1995]: time="2026-01-15T23:50:55.144375650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:55.147942 containerd[1995]: time="2026-01-15T23:50:55.147784478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:50:55.147942 containerd[1995]: time="2026-01-15T23:50:55.147818210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:55.148216 kubelet[3591]: E0115 23:50:55.148133 3591 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:55.148743 kubelet[3591]: E0115 23:50:55.148225 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:55.148743 kubelet[3591]: E0115 23:50:55.148554 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgdjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-96q62_calico-apiserver(5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:55.149885 kubelet[3591]: E0115 23:50:55.149814 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:50:56.473598 systemd[1]: Started sshd@19-172.31.28.91:22-20.161.92.111:32778.service - OpenSSH per-connection server daemon (20.161.92.111:32778). 
Jan 15 23:50:56.999625 sshd[5851]: Accepted publickey for core from 20.161.92.111 port 32778 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:50:57.001959 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:50:57.011424 systemd-logind[1976]: New session 20 of user core. Jan 15 23:50:57.016735 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 15 23:50:57.478290 sshd[5854]: Connection closed by 20.161.92.111 port 32778 Jan 15 23:50:57.479119 sshd-session[5851]: pam_unix(sshd:session): session closed for user core Jan 15 23:50:57.486734 systemd-logind[1976]: Session 20 logged out. Waiting for processes to exit. Jan 15 23:50:57.488222 systemd[1]: sshd@19-172.31.28.91:22-20.161.92.111:32778.service: Deactivated successfully. Jan 15 23:50:57.491479 systemd[1]: session-20.scope: Deactivated successfully. Jan 15 23:50:57.498096 systemd-logind[1976]: Removed session 20. Jan 15 23:50:57.859882 containerd[1995]: time="2026-01-15T23:50:57.859452668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:50:58.160288 containerd[1995]: time="2026-01-15T23:50:58.159819833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:58.162040 containerd[1995]: time="2026-01-15T23:50:58.161925089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:50:58.162271 containerd[1995]: time="2026-01-15T23:50:58.162025337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:58.162698 kubelet[3591]: E0115 23:50:58.162618 3591 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:58.164758 kubelet[3591]: E0115 23:50:58.162689 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:50:58.164758 kubelet[3591]: E0115 23:50:58.163052 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w54nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-zzmq4_calico-apiserver(3a3a871b-481b-4197-950a-9e2f48b0e53a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:58.165755 containerd[1995]: time="2026-01-15T23:50:58.164486069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:50:58.165828 kubelet[3591]: E0115 23:50:58.164800 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:50:58.445058 containerd[1995]: 
time="2026-01-15T23:50:58.444982614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:58.447212 containerd[1995]: time="2026-01-15T23:50:58.447136218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:50:58.447550 containerd[1995]: time="2026-01-15T23:50:58.447247422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:50:58.447867 kubelet[3591]: E0115 23:50:58.447427 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:50:58.447867 kubelet[3591]: E0115 23:50:58.447664 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:50:58.448182 containerd[1995]: time="2026-01-15T23:50:58.448134138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:50:58.449591 kubelet[3591]: E0115 23:50:58.448335 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcqlh_calico-system(7292bea6-012f-4e29-ba2d-73a4ea488a56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:58.450227 kubelet[3591]: E0115 23:50:58.450140 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:50:58.713183 containerd[1995]: time="2026-01-15T23:50:58.712929584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:58.715748 containerd[1995]: time="2026-01-15T23:50:58.715671032Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:50:58.715877 containerd[1995]: time="2026-01-15T23:50:58.715717820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:50:58.716290 kubelet[3591]: E0115 23:50:58.716220 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:50:58.716376 kubelet[3591]: E0115 23:50:58.716292 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:50:58.716553 kubelet[3591]: E0115 23:50:58.716450 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:58.719825 containerd[1995]: time="2026-01-15T23:50:58.719672192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:50:59.004061 containerd[1995]: time="2026-01-15T23:50:59.003905777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:50:59.006246 containerd[1995]: time="2026-01-15T23:50:59.006073217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:50:59.006246 containerd[1995]: time="2026-01-15T23:50:59.006152849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:50:59.006635 kubelet[3591]: E0115 23:50:59.006588 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:50:59.006790 kubelet[3591]: E0115 23:50:59.006760 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:50:59.007092 kubelet[3591]: E0115 
23:50:59.007022 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:50:59.009020 kubelet[3591]: E0115 23:50:59.008915 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:51:02.571342 systemd[1]: Started sshd@20-172.31.28.91:22-20.161.92.111:35842.service - OpenSSH per-connection server daemon (20.161.92.111:35842). 
Jan 15 23:51:02.863603 kubelet[3591]: E0115 23:51:02.861620 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:51:03.091322 sshd[5866]: Accepted publickey for core from 20.161.92.111 port 35842 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:51:03.092380 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:03.100838 systemd-logind[1976]: New session 21 of user core. Jan 15 23:51:03.106726 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 15 23:51:03.580365 sshd[5870]: Connection closed by 20.161.92.111 port 35842 Jan 15 23:51:03.580243 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Jan 15 23:51:03.587319 systemd-logind[1976]: Session 21 logged out. Waiting for processes to exit. Jan 15 23:51:03.588236 systemd[1]: sshd@20-172.31.28.91:22-20.161.92.111:35842.service: Deactivated successfully. Jan 15 23:51:03.593111 systemd[1]: session-21.scope: Deactivated successfully. Jan 15 23:51:03.596296 systemd-logind[1976]: Removed session 21. 
Jan 15 23:51:04.867415 kubelet[3591]: E0115 23:51:04.867336 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:51:08.679957 systemd[1]: Started sshd@21-172.31.28.91:22-20.161.92.111:35846.service - OpenSSH per-connection server daemon (20.161.92.111:35846). 
Jan 15 23:51:08.861179 kubelet[3591]: E0115 23:51:08.861111 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:51:09.225886 sshd[5882]: Accepted publickey for core from 20.161.92.111 port 35846 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:51:09.232393 sshd-session[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:09.243980 systemd-logind[1976]: New session 22 of user core. Jan 15 23:51:09.250851 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 15 23:51:09.824365 sshd[5885]: Connection closed by 20.161.92.111 port 35846 Jan 15 23:51:09.825163 sshd-session[5882]: pam_unix(sshd:session): session closed for user core Jan 15 23:51:09.834716 systemd[1]: sshd@21-172.31.28.91:22-20.161.92.111:35846.service: Deactivated successfully. Jan 15 23:51:09.841800 systemd[1]: session-22.scope: Deactivated successfully. Jan 15 23:51:09.844545 systemd-logind[1976]: Session 22 logged out. Waiting for processes to exit. Jan 15 23:51:09.850535 systemd-logind[1976]: Removed session 22. 
Jan 15 23:51:09.859394 kubelet[3591]: E0115 23:51:09.859324 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:51:11.858131 kubelet[3591]: E0115 23:51:11.858064 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:51:11.859935 kubelet[3591]: E0115 23:51:11.859853 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:51:14.923980 systemd[1]: Started sshd@22-172.31.28.91:22-20.161.92.111:33018.service - OpenSSH per-connection server daemon (20.161.92.111:33018). Jan 15 23:51:15.480914 sshd[5924]: Accepted publickey for core from 20.161.92.111 port 33018 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:51:15.483972 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:15.504376 systemd-logind[1976]: New session 23 of user core. Jan 15 23:51:15.512775 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 15 23:51:15.862935 kubelet[3591]: E0115 23:51:15.862785 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:51:16.094645 
sshd[5927]: Connection closed by 20.161.92.111 port 33018 Jan 15 23:51:16.096750 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Jan 15 23:51:16.105573 systemd-logind[1976]: Session 23 logged out. Waiting for processes to exit. Jan 15 23:51:16.105934 systemd[1]: sshd@22-172.31.28.91:22-20.161.92.111:33018.service: Deactivated successfully. Jan 15 23:51:16.112913 systemd[1]: session-23.scope: Deactivated successfully. Jan 15 23:51:16.118213 systemd-logind[1976]: Removed session 23. Jan 15 23:51:16.858505 kubelet[3591]: E0115 23:51:16.858272 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:51:21.193415 systemd[1]: Started sshd@23-172.31.28.91:22-20.161.92.111:33028.service - OpenSSH per-connection server daemon (20.161.92.111:33028). Jan 15 23:51:21.764020 sshd[5942]: Accepted publickey for core from 20.161.92.111 port 33028 ssh2: RSA SHA256:1Btj3FQgoJ9ARhcN8blmbw/i+aoCNX1+lby9c8KZpWE Jan 15 23:51:21.768053 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 15 23:51:21.778123 systemd-logind[1976]: New session 24 of user core. Jan 15 23:51:21.787028 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 15 23:51:22.338072 sshd[5945]: Connection closed by 20.161.92.111 port 33028 Jan 15 23:51:22.338615 sshd-session[5942]: pam_unix(sshd:session): session closed for user core Jan 15 23:51:22.350129 systemd-logind[1976]: Session 24 logged out. Waiting for processes to exit. Jan 15 23:51:22.351119 systemd[1]: sshd@23-172.31.28.91:22-20.161.92.111:33028.service: Deactivated successfully. Jan 15 23:51:22.359771 systemd[1]: session-24.scope: Deactivated successfully. Jan 15 23:51:22.367080 systemd-logind[1976]: Removed session 24. Jan 15 23:51:22.860830 kubelet[3591]: E0115 23:51:22.860479 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:51:23.877973 kubelet[3591]: E0115 23:51:23.877695 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:51:24.858371 kubelet[3591]: E0115 23:51:24.858178 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 23:51:24.861545 kubelet[3591]: E0115 23:51:24.860861 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:51:28.858073 kubelet[3591]: E0115 23:51:28.857915 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:51:29.856651 containerd[1995]: time="2026-01-15T23:51:29.856286642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 15 23:51:30.152904 containerd[1995]: time="2026-01-15T23:51:30.152190192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:30.156070 containerd[1995]: time="2026-01-15T23:51:30.155878452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 15 23:51:30.156070 containerd[1995]: time="2026-01-15T23:51:30.156015672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 15 23:51:30.156418 kubelet[3591]: E0115 23:51:30.156312 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:51:30.157080 kubelet[3591]: E0115 
23:51:30.156418 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 15 23:51:30.157080 kubelet[3591]: E0115 23:51:30.156714 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lgmf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fdbbb9d69-q7mqz_calico-system(250297aa-f2ed-4da8-b086-a79052c5e783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:30.158090 kubelet[3591]: E0115 23:51:30.158023 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:51:34.857367 kubelet[3591]: E0115 23:51:34.857163 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:51:35.857251 kubelet[3591]: E0115 23:51:35.857192 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-zzmq4" podUID="3a3a871b-481b-4197-950a-9e2f48b0e53a" Jan 15 23:51:35.857613 containerd[1995]: time="2026-01-15T23:51:35.857104004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 15 23:51:36.022213 systemd[1]: 
cri-containerd-0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d.scope: Deactivated successfully. Jan 15 23:51:36.023283 systemd[1]: cri-containerd-0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d.scope: Consumed 4.323s CPU time, 58.7M memory peak, 64K read from disk. Jan 15 23:51:36.030298 containerd[1995]: time="2026-01-15T23:51:36.030228749Z" level=info msg="received container exit event container_id:\"0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d\" id:\"0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d\" pid:3132 exit_status:1 exited_at:{seconds:1768521096 nanos:29806661}" Jan 15 23:51:36.052808 systemd[1]: cri-containerd-bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726.scope: Deactivated successfully. Jan 15 23:51:36.055148 systemd[1]: cri-containerd-bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726.scope: Consumed 25.266s CPU time, 110.3M memory peak. Jan 15 23:51:36.060186 containerd[1995]: time="2026-01-15T23:51:36.060117341Z" level=info msg="received container exit event container_id:\"bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726\" id:\"bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726\" pid:3912 exit_status:1 exited_at:{seconds:1768521096 nanos:59497589}" Jan 15 23:51:36.100993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d-rootfs.mount: Deactivated successfully. Jan 15 23:51:36.127105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726-rootfs.mount: Deactivated successfully. 
Jan 15 23:51:36.159484 containerd[1995]: time="2026-01-15T23:51:36.159395610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:36.161644 containerd[1995]: time="2026-01-15T23:51:36.161570274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 15 23:51:36.161756 containerd[1995]: time="2026-01-15T23:51:36.161696646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:36.162368 kubelet[3591]: E0115 23:51:36.162023 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:36.162368 kubelet[3591]: E0115 23:51:36.162088 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 15 23:51:36.162368 kubelet[3591]: E0115 23:51:36.162256 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgdjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c69b78f6b-96q62_calico-apiserver(5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:36.163609 kubelet[3591]: E0115 23:51:36.163538 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:51:36.562033 kubelet[3591]: I0115 23:51:36.561887 3591 scope.go:117] "RemoveContainer" containerID="bd0dbfc289b962d893d9d8e8ed3e3d85df85521742556d9a16d05f2e227ee726" Jan 15 23:51:36.567248 containerd[1995]: time="2026-01-15T23:51:36.567182132Z" level=info msg="CreateContainer within sandbox \"3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 15 23:51:36.569738 kubelet[3591]: I0115 23:51:36.569690 3591 scope.go:117] "RemoveContainer" containerID="0f53314c0f07d08df004c4b0ad2718ce94ba9c9c6bcd74448eae5a914163408d" Jan 15 23:51:36.574579 containerd[1995]: time="2026-01-15T23:51:36.574442840Z" level=info msg="CreateContainer within sandbox \"5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 15 23:51:36.597498 containerd[1995]: time="2026-01-15T23:51:36.594534356Z" level=info msg="Container 8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:36.614366 containerd[1995]: time="2026-01-15T23:51:36.614294192Z" level=info msg="CreateContainer within 
sandbox \"3543868b91a16ff9483260f2a5d5839292393ceba2bc2b3afb6c6d8713d99134\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1\"" Jan 15 23:51:36.615187 containerd[1995]: time="2026-01-15T23:51:36.615008936Z" level=info msg="Container 6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:36.616886 containerd[1995]: time="2026-01-15T23:51:36.615811688Z" level=info msg="StartContainer for \"8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1\"" Jan 15 23:51:36.618813 containerd[1995]: time="2026-01-15T23:51:36.618740000Z" level=info msg="connecting to shim 8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1" address="unix:///run/containerd/s/d95382073476066c472c074e23611e5062c1960b6f50f0ad382f6af9c337fd60" protocol=ttrpc version=3 Jan 15 23:51:36.639091 containerd[1995]: time="2026-01-15T23:51:36.639032288Z" level=info msg="CreateContainer within sandbox \"5e597a1a87f1a7425671cb76e8d08cf68072d7dbe3a7aee8a47f527f52867434\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210\"" Jan 15 23:51:36.642492 containerd[1995]: time="2026-01-15T23:51:36.641664320Z" level=info msg="StartContainer for \"6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210\"" Jan 15 23:51:36.644727 containerd[1995]: time="2026-01-15T23:51:36.644674976Z" level=info msg="connecting to shim 6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210" address="unix:///run/containerd/s/ceac1c5dbf6a58743a791ec89a97bf4c3eb6c9c941b96233e06dc41b1dc079aa" protocol=ttrpc version=3 Jan 15 23:51:36.669041 systemd[1]: Started cri-containerd-8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1.scope - libcontainer container 8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1. 
Jan 15 23:51:36.693775 systemd[1]: Started cri-containerd-6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210.scope - libcontainer container 6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210. Jan 15 23:51:36.763419 containerd[1995]: time="2026-01-15T23:51:36.763356249Z" level=info msg="StartContainer for \"8dd65de8614f66fdf50b4a407751420dab4b24c307d30cdb561447adeb2a39e1\" returns successfully" Jan 15 23:51:36.813169 containerd[1995]: time="2026-01-15T23:51:36.812972325Z" level=info msg="StartContainer for \"6a6f4a4410340987ba18f39444cf79c24294c9474f2c29b1f593951e3f49f210\" returns successfully" Jan 15 23:51:38.858148 containerd[1995]: time="2026-01-15T23:51:38.858090419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 15 23:51:39.119960 containerd[1995]: time="2026-01-15T23:51:39.119628260Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:39.122028 containerd[1995]: time="2026-01-15T23:51:39.121838661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 15 23:51:39.122028 containerd[1995]: time="2026-01-15T23:51:39.121978881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 15 23:51:39.122493 kubelet[3591]: E0115 23:51:39.122416 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:51:39.123644 kubelet[3591]: E0115 23:51:39.123034 3591 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 15 23:51:39.123644 kubelet[3591]: E0115 23:51:39.123441 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwng9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zcqlh_calico-system(7292bea6-012f-4e29-ba2d-73a4ea488a56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:39.125147 kubelet[3591]: E0115 23:51:39.125056 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zcqlh" podUID="7292bea6-012f-4e29-ba2d-73a4ea488a56" Jan 15 
23:51:42.343196 systemd[1]: cri-containerd-22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77.scope: Deactivated successfully. Jan 15 23:51:42.344963 systemd[1]: cri-containerd-22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77.scope: Consumed 4.336s CPU time, 22.2M memory peak, 196K read from disk. Jan 15 23:51:42.348437 containerd[1995]: time="2026-01-15T23:51:42.347503465Z" level=info msg="received container exit event container_id:\"22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77\" id:\"22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77\" pid:3163 exit_status:1 exited_at:{seconds:1768521102 nanos:346902721}" Jan 15 23:51:42.391141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77-rootfs.mount: Deactivated successfully. Jan 15 23:51:42.600217 kubelet[3591]: I0115 23:51:42.599792 3591 scope.go:117] "RemoveContainer" containerID="22297b5d1d31bea142752d47c0c1717fb742822ce638407ba800e266bab27b77" Jan 15 23:51:42.604635 containerd[1995]: time="2026-01-15T23:51:42.604579814Z" level=info msg="CreateContainer within sandbox \"f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 15 23:51:42.626794 containerd[1995]: time="2026-01-15T23:51:42.625798622Z" level=info msg="Container c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce: CDI devices from CRI Config.CDIDevices: []" Jan 15 23:51:42.633399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount865902742.mount: Deactivated successfully. 
Jan 15 23:51:42.648882 containerd[1995]: time="2026-01-15T23:51:42.648821630Z" level=info msg="CreateContainer within sandbox \"f48103fc8f9c7c4749ea7692646dd4ce70d36d09b114f5095ad7de9d0763101e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce\"" Jan 15 23:51:42.650024 containerd[1995]: time="2026-01-15T23:51:42.649943150Z" level=info msg="StartContainer for \"c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce\"" Jan 15 23:51:42.652850 containerd[1995]: time="2026-01-15T23:51:42.652789382Z" level=info msg="connecting to shim c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce" address="unix:///run/containerd/s/4112be832d09547c72c5dfb038e5a590b4d75674dcc835c7534d7438d7a0f3ba" protocol=ttrpc version=3 Jan 15 23:51:42.695789 systemd[1]: Started cri-containerd-c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce.scope - libcontainer container c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce. 
Jan 15 23:51:42.775422 containerd[1995]: time="2026-01-15T23:51:42.775274739Z" level=info msg="StartContainer for \"c399c862bc842568f785ee374751b0834b910999c02b250b2279dd2b7db183ce\" returns successfully" Jan 15 23:51:42.858559 containerd[1995]: time="2026-01-15T23:51:42.857992623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 15 23:51:43.145712 containerd[1995]: time="2026-01-15T23:51:43.145582692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:43.148148 containerd[1995]: time="2026-01-15T23:51:43.148043413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 15 23:51:43.148419 containerd[1995]: time="2026-01-15T23:51:43.148318141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 15 23:51:43.148891 kubelet[3591]: E0115 23:51:43.148755 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:51:43.148891 kubelet[3591]: E0115 23:51:43.148840 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 15 23:51:43.149167 kubelet[3591]: E0115 23:51:43.149088 3591 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:bb2667bde4a84941ae0fd665fe854e1a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:43.151369 containerd[1995]: time="2026-01-15T23:51:43.151273621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 
15 23:51:43.427014 containerd[1995]: time="2026-01-15T23:51:43.426742946Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:43.429151 containerd[1995]: time="2026-01-15T23:51:43.428987834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 15 23:51:43.429151 containerd[1995]: time="2026-01-15T23:51:43.429111362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 15 23:51:43.429817 kubelet[3591]: E0115 23:51:43.429508 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:51:43.429817 kubelet[3591]: E0115 23:51:43.429574 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 15 23:51:43.429817 kubelet[3591]: E0115 23:51:43.429719 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fnz7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6bbb75d98d-f8wxn_calico-system(4b2d9ae2-30d4-43cf-844d-a86d433a646c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:43.431012 kubelet[3591]: E0115 23:51:43.430942 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6bbb75d98d-f8wxn" podUID="4b2d9ae2-30d4-43cf-844d-a86d433a646c" Jan 15 23:51:43.857528 kubelet[3591]: E0115 23:51:43.857450 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fdbbb9d69-q7mqz" podUID="250297aa-f2ed-4da8-b086-a79052c5e783" Jan 15 23:51:45.724654 kubelet[3591]: E0115 23:51:45.724579 3591 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-91?timeout=10s\": 
context deadline exceeded" Jan 15 23:51:45.857737 containerd[1995]: time="2026-01-15T23:51:45.857680110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 15 23:51:46.117425 containerd[1995]: time="2026-01-15T23:51:46.117281331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:46.119820 containerd[1995]: time="2026-01-15T23:51:46.119543283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 15 23:51:46.119820 containerd[1995]: time="2026-01-15T23:51:46.119672031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 15 23:51:46.120003 kubelet[3591]: E0115 23:51:46.119868 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:51:46.120003 kubelet[3591]: E0115 23:51:46.119927 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 15 23:51:46.120194 kubelet[3591]: E0115 23:51:46.120103 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:46.123203 containerd[1995]: time="2026-01-15T23:51:46.123081639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 15 23:51:46.407624 containerd[1995]: time="2026-01-15T23:51:46.407447597Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 15 23:51:46.410196 containerd[1995]: time="2026-01-15T23:51:46.410125025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 15 23:51:46.410314 containerd[1995]: time="2026-01-15T23:51:46.410256077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 15 23:51:46.410579 kubelet[3591]: E0115 23:51:46.410519 3591 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:51:46.410680 kubelet[3591]: E0115 23:51:46.410590 3591 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 15 23:51:46.411489 kubelet[3591]: E0115 
23:51:46.410743 3591 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvgns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-hscnf_calico-system(9fb7073f-5e73-4607-9430-af7f999d9c94): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 15 23:51:46.412090 kubelet[3591]: E0115 23:51:46.412024 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hscnf" podUID="9fb7073f-5e73-4607-9430-af7f999d9c94" Jan 15 23:51:46.857092 kubelet[3591]: E0115 23:51:46.857006 3591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c69b78f6b-96q62" podUID="5b0fb1bc-c77b-46e5-94d7-ad2de2073aa0" Jan 15 23:51:47.856908 containerd[1995]: time="2026-01-15T23:51:47.856625204Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""