Dec 16 12:26:02.134937 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 16 12:26:02.134980 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:26:02.135004 kernel: KASLR disabled due to lack of seed
Dec 16 12:26:02.137153 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:26:02.137182 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Dec 16 12:26:02.137215 kernel: secureboot: Secure boot disabled
Dec 16 12:26:02.137236 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:26:02.137251 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 16 12:26:02.137269 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 12:26:02.137284 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 12:26:02.137300 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 12:26:02.137327 kernel: ACPI: FACS 0x0000000078630000 000040
Dec 16 12:26:02.137342 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 12:26:02.137357 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 16 12:26:02.137375 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 16 12:26:02.137391 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 16 12:26:02.137411 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 12:26:02.137428 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 16 12:26:02.137444 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 16 12:26:02.137460 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 16 12:26:02.137475 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 16 12:26:02.137491 kernel: printk: legacy bootconsole [uart0] enabled
Dec 16 12:26:02.137507 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:26:02.137524 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 16 12:26:02.137540 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Dec 16 12:26:02.137556 kernel: Zone ranges:
Dec 16 12:26:02.137572 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 16 12:26:02.137592 kernel: DMA32 empty
Dec 16 12:26:02.137608 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 16 12:26:02.137624 kernel: Device empty
Dec 16 12:26:02.137639 kernel: Movable zone start for each node
Dec 16 12:26:02.137655 kernel: Early memory node ranges
Dec 16 12:26:02.137671 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 16 12:26:02.137687 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 16 12:26:02.137703 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 16 12:26:02.137719 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 16 12:26:02.137735 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 16 12:26:02.137751 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 16 12:26:02.137767 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 16 12:26:02.137787 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 16 12:26:02.137810 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 16 12:26:02.137828 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 16 12:26:02.137845 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Dec 16 12:26:02.137862 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:26:02.137884 kernel: psci: PSCIv1.0 detected in firmware.
Dec 16 12:26:02.137900 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:26:02.137917 kernel: psci: Trusted OS migration not required
Dec 16 12:26:02.137935 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:26:02.137952 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Dec 16 12:26:02.137969 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:26:02.137986 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:26:02.138004 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:26:02.138136 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:26:02.138157 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:26:02.138175 kernel: CPU features: detected: Spectre-v2
Dec 16 12:26:02.138200 kernel: CPU features: detected: Spectre-v3a
Dec 16 12:26:02.138217 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:26:02.138235 kernel: CPU features: detected: ARM erratum 1742098
Dec 16 12:26:02.138252 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 16 12:26:02.138270 kernel: alternatives: applying boot alternatives
Dec 16 12:26:02.138289 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:26:02.138308 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:26:02.138325 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:26:02.138342 kernel: Fallback order for Node 0: 0
Dec 16 12:26:02.138359 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Dec 16 12:26:02.138377 kernel: Policy zone: Normal
Dec 16 12:26:02.138399 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:26:02.138416 kernel: software IO TLB: area num 2.
Dec 16 12:26:02.138434 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Dec 16 12:26:02.138451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:26:02.138467 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:26:02.138485 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:26:02.138503 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:26:02.138521 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:26:02.138538 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:26:02.138555 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:26:02.138572 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:26:02.138593 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:26:02.138610 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:26:02.138627 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:26:02.138644 kernel: GICv3: 96 SPIs implemented
Dec 16 12:26:02.138661 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:26:02.138678 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:26:02.138694 kernel: GICv3: GICv3 features: 16 PPIs
Dec 16 12:26:02.138711 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:26:02.138728 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 16 12:26:02.138745 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 16 12:26:02.138762 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:26:02.138779 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:26:02.138800 kernel: GICv3: using LPI property table @0x0000000400110000
Dec 16 12:26:02.138817 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 16 12:26:02.138834 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Dec 16 12:26:02.138851 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:26:02.138868 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 16 12:26:02.138885 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 16 12:26:02.138902 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 16 12:26:02.138919 kernel: Console: colour dummy device 80x25
Dec 16 12:26:02.138937 kernel: printk: legacy console [tty1] enabled
Dec 16 12:26:02.138954 kernel: ACPI: Core revision 20240827
Dec 16 12:26:02.138971 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 16 12:26:02.138993 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:26:02.139010 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:26:02.139054 kernel: landlock: Up and running.
Dec 16 12:26:02.139073 kernel: SELinux: Initializing.
Dec 16 12:26:02.139091 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:26:02.139108 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:26:02.139126 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:26:02.139143 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:26:02.139166 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:26:02.139183 kernel: Remapping and enabling EFI services.
Dec 16 12:26:02.139200 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:26:02.139218 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:26:02.139235 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 16 12:26:02.139252 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Dec 16 12:26:02.139269 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 16 12:26:02.139286 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:26:02.139303 kernel: SMP: Total of 2 processors activated.
Dec 16 12:26:02.139325 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:26:02.139352 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:26:02.139370 kernel: CPU features: detected: 32-bit EL1 Support
Dec 16 12:26:02.139392 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:26:02.139410 kernel: alternatives: applying system-wide alternatives
Dec 16 12:26:02.139428 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Dec 16 12:26:02.139447 kernel: devtmpfs: initialized
Dec 16 12:26:02.139465 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:26:02.139488 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:26:02.139507 kernel: 16880 pages in range for non-PLT usage
Dec 16 12:26:02.139526 kernel: 508400 pages in range for PLT usage
Dec 16 12:26:02.139544 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:26:02.139562 kernel: SMBIOS 3.0.0 present.
Dec 16 12:26:02.139580 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 16 12:26:02.139598 kernel: DMI: Memory slots populated: 0/0
Dec 16 12:26:02.139616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:26:02.139634 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:26:02.139657 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:26:02.139675 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:26:02.139694 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:26:02.139712 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Dec 16 12:26:02.139730 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:26:02.139748 kernel: cpuidle: using governor menu
Dec 16 12:26:02.139766 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:26:02.139784 kernel: ASID allocator initialised with 65536 entries
Dec 16 12:26:02.139802 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:26:02.139824 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:26:02.139842 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:26:02.139860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:26:02.139878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:26:02.139896 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:26:02.139913 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:26:02.139931 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:26:02.139949 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:26:02.139967 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:26:02.139989 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:26:02.140007 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:26:02.142518 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:26:02.142592 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:26:02.142611 kernel: ACPI: Interpreter enabled
Dec 16 12:26:02.142629 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:26:02.142647 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:26:02.142666 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:26:02.142684 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:26:02.142710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Dec 16 12:26:02.143038 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:26:02.143255 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:26:02.143453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:26:02.143647 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Dec 16 12:26:02.143842 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Dec 16 12:26:02.143866 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 16 12:26:02.143894 kernel: acpiphp: Slot [1] registered
Dec 16 12:26:02.143913 kernel: acpiphp: Slot [2] registered
Dec 16 12:26:02.143932 kernel: acpiphp: Slot [3] registered
Dec 16 12:26:02.143949 kernel: acpiphp: Slot [4] registered
Dec 16 12:26:02.143967 kernel: acpiphp: Slot [5] registered
Dec 16 12:26:02.143985 kernel: acpiphp: Slot [6] registered
Dec 16 12:26:02.144004 kernel: acpiphp: Slot [7] registered
Dec 16 12:26:02.148750 kernel: acpiphp: Slot [8] registered
Dec 16 12:26:02.148781 kernel: acpiphp: Slot [9] registered
Dec 16 12:26:02.148799 kernel: acpiphp: Slot [10] registered
Dec 16 12:26:02.148828 kernel: acpiphp: Slot [11] registered
Dec 16 12:26:02.148847 kernel: acpiphp: Slot [12] registered
Dec 16 12:26:02.148865 kernel: acpiphp: Slot [13] registered
Dec 16 12:26:02.148882 kernel: acpiphp: Slot [14] registered
Dec 16 12:26:02.148900 kernel: acpiphp: Slot [15] registered
Dec 16 12:26:02.148918 kernel: acpiphp: Slot [16] registered
Dec 16 12:26:02.148935 kernel: acpiphp: Slot [17] registered
Dec 16 12:26:02.148953 kernel: acpiphp: Slot [18] registered
Dec 16 12:26:02.148970 kernel: acpiphp: Slot [19] registered
Dec 16 12:26:02.148992 kernel: acpiphp: Slot [20] registered
Dec 16 12:26:02.149010 kernel: acpiphp: Slot [21] registered
Dec 16 12:26:02.149162 kernel: acpiphp: Slot [22] registered
Dec 16 12:26:02.149182 kernel: acpiphp: Slot [23] registered
Dec 16 12:26:02.149217 kernel: acpiphp: Slot [24] registered
Dec 16 12:26:02.149238 kernel: acpiphp: Slot [25] registered
Dec 16 12:26:02.149256 kernel: acpiphp: Slot [26] registered
Dec 16 12:26:02.149273 kernel: acpiphp: Slot [27] registered
Dec 16 12:26:02.149291 kernel: acpiphp: Slot [28] registered
Dec 16 12:26:02.149309 kernel: acpiphp: Slot [29] registered
Dec 16 12:26:02.149333 kernel: acpiphp: Slot [30] registered
Dec 16 12:26:02.149351 kernel: acpiphp: Slot [31] registered
Dec 16 12:26:02.149368 kernel: PCI host bridge to bus 0000:00
Dec 16 12:26:02.149608 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 16 12:26:02.149781 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:26:02.149949 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 16 12:26:02.150146 kernel: pci_bus 0000:00: root bus resource [bus 00]
Dec 16 12:26:02.150371 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:26:02.150591 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Dec 16 12:26:02.150785 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Dec 16 12:26:02.150999 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Dec 16 12:26:02.153381 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Dec 16 12:26:02.153589 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 16 12:26:02.153811 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Dec 16 12:26:02.154006 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Dec 16 12:26:02.154235 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Dec 16 12:26:02.154425 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Dec 16 12:26:02.154613 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 16 12:26:02.154788 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 16 12:26:02.154956 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:26:02.157191 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 16 12:26:02.157240 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:26:02.157260 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:26:02.157278 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:26:02.157297 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:26:02.157315 kernel: iommu: Default domain type: Translated
Dec 16 12:26:02.157333 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:26:02.157351 kernel: efivars: Registered efivars operations
Dec 16 12:26:02.157368 kernel: vgaarb: loaded
Dec 16 12:26:02.157395 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:26:02.157413 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:26:02.157431 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:26:02.157449 kernel: pnp: PnP ACPI init
Dec 16 12:26:02.157663 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 16 12:26:02.157690 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:26:02.157708 kernel: NET: Registered PF_INET protocol family
Dec 16 12:26:02.157727 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:26:02.157750 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:26:02.157768 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:26:02.157786 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:26:02.157804 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:26:02.157822 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:26:02.157840 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:26:02.157858 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:26:02.157876 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:26:02.157894 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:26:02.157916 kernel: kvm [1]: HYP mode not available
Dec 16 12:26:02.157934 kernel: Initialise system trusted keyrings
Dec 16 12:26:02.157951 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:26:02.157969 kernel: Key type asymmetric registered
Dec 16 12:26:02.157987 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:26:02.158005 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:26:02.158043 kernel: io scheduler mq-deadline registered
Dec 16 12:26:02.158064 kernel: io scheduler kyber registered
Dec 16 12:26:02.158082 kernel: io scheduler bfq registered
Dec 16 12:26:02.158296 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 16 12:26:02.158323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:26:02.158341 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:26:02.158359 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 16 12:26:02.158377 kernel: ACPI: button: Sleep Button [SLPB]
Dec 16 12:26:02.158395 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:26:02.158414 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 16 12:26:02.158607 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 16 12:26:02.158637 kernel: printk: legacy console [ttyS0] disabled
Dec 16 12:26:02.158655 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 16 12:26:02.158673 kernel: printk: legacy console [ttyS0] enabled
Dec 16 12:26:02.158691 kernel: printk: legacy bootconsole [uart0] disabled
Dec 16 12:26:02.158709 kernel: thunder_xcv, ver 1.0
Dec 16 12:26:02.158726 kernel: thunder_bgx, ver 1.0
Dec 16 12:26:02.158744 kernel: nicpf, ver 1.0
Dec 16 12:26:02.158762 kernel: nicvf, ver 1.0
Dec 16 12:26:02.158954 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:26:02.160822 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:26:01 UTC (1765887961)
Dec 16 12:26:02.160859 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:26:02.160878 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Dec 16 12:26:02.160897 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:26:02.160915 kernel: watchdog: NMI not fully supported
Dec 16 12:26:02.160933 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:26:02.160951 kernel: Segment Routing with IPv6
Dec 16 12:26:02.160968 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:26:02.160986 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:26:02.161030 kernel: Key type dns_resolver registered
Dec 16 12:26:02.161055 kernel: registered taskstats version 1
Dec 16 12:26:02.161073 kernel: Loading compiled-in X.509 certificates
Dec 16 12:26:02.161092 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:26:02.161110 kernel: Demotion targets for Node 0: null
Dec 16 12:26:02.161127 kernel: Key type .fscrypt registered
Dec 16 12:26:02.161145 kernel: Key type fscrypt-provisioning registered
Dec 16 12:26:02.161163 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:26:02.161180 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:26:02.161262 kernel: ima: No architecture policies found
Dec 16 12:26:02.161284 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:26:02.161302 kernel: clk: Disabling unused clocks
Dec 16 12:26:02.161319 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:26:02.161337 kernel: Warning: unable to open an initial console.
Dec 16 12:26:02.161356 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:26:02.161374 kernel: Run /init as init process
Dec 16 12:26:02.161391 kernel: with arguments:
Dec 16 12:26:02.161409 kernel: /init
Dec 16 12:26:02.161433 kernel: with environment:
Dec 16 12:26:02.161451 kernel: HOME=/
Dec 16 12:26:02.161470 kernel: TERM=linux
Dec 16 12:26:02.161490 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:26:02.161515 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:26:02.161535 systemd[1]: Detected virtualization amazon.
Dec 16 12:26:02.161554 systemd[1]: Detected architecture arm64.
Dec 16 12:26:02.161577 systemd[1]: Running in initrd.
Dec 16 12:26:02.161597 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:26:02.161617 systemd[1]: Hostname set to .
Dec 16 12:26:02.161636 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:26:02.161654 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:26:02.161673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:26:02.161693 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:02.161713 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:26:02.161737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:26:02.161757 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:26:02.161777 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:26:02.161798 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:26:02.161818 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:26:02.161837 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:26:02.161857 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:26:02.161881 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:26:02.161900 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:26:02.161920 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:26:02.161939 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:26:02.161958 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:26:02.161978 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:26:02.161998 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:26:02.162042 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:26:02.162066 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:02.162093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:02.162112 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:02.162131 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:26:02.162150 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:26:02.162170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:26:02.162189 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:26:02.162209 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:26:02.162228 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:26:02.162252 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:26:02.162272 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:26:02.162291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:02.162311 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:26:02.162331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:26:02.162356 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:26:02.162426 systemd-journald[260]: Collecting audit messages is disabled.
Dec 16 12:26:02.162470 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:26:02.162496 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:26:02.162515 kernel: Bridge firewalling registered
Dec 16 12:26:02.162534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:02.162554 systemd-journald[260]: Journal started
Dec 16 12:26:02.162826 systemd-journald[260]: Runtime Journal (/run/log/journal/ec213b03ab6859f540efd5bbab58adc4) is 8M, max 75.3M, 67.3M free.
Dec 16 12:26:02.095415 systemd-modules-load[261]: Inserted module 'overlay'
Dec 16 12:26:02.139580 systemd-modules-load[261]: Inserted module 'br_netfilter'
Dec 16 12:26:02.177208 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:26:02.182305 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:26:02.184536 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:02.194394 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:26:02.206272 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:26:02.218090 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:26:02.225521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:26:02.257091 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:02.260000 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:26:02.274871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:02.283360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:02.287988 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:26:02.300090 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:02.316823 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:26:02.354811 dracut-cmdline[303]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:26:02.398414 systemd-resolved[298]: Positive Trust Anchors:
Dec 16 12:26:02.398442 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:26:02.398503 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:26:02.548058 kernel: SCSI subsystem initialized
Dec 16 12:26:02.556058 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:26:02.569056 kernel: iscsi: registered transport (tcp)
Dec 16 12:26:02.591307 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:26:02.591391 kernel: QLogic iSCSI HBA Driver
Dec 16 12:26:02.626218 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:26:02.661134 kernel: random: crng init done
Dec 16 12:26:02.661433 systemd-resolved[298]: Defaulting to hostname 'linux'.
Dec 16 12:26:02.665406 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:26:02.669567 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:26:02.679759 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:26:02.689001 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:26:02.771092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:26:02.774258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:26:02.863063 kernel: raid6: neonx8 gen() 6509 MB/s
Dec 16 12:26:02.880074 kernel: raid6: neonx4 gen() 6481 MB/s
Dec 16 12:26:02.898064 kernel: raid6: neonx2 gen() 5403 MB/s
Dec 16 12:26:02.915071 kernel: raid6: neonx1 gen() 3938 MB/s
Dec 16 12:26:02.932061 kernel: raid6: int64x8 gen() 3631 MB/s
Dec 16 12:26:02.949065 kernel: raid6: int64x4 gen() 3716 MB/s
Dec 16 12:26:02.966062 kernel: raid6: int64x2 gen() 3585 MB/s
Dec 16 12:26:02.984175 kernel: raid6: int64x1 gen() 2770 MB/s
Dec 16 12:26:02.984235 kernel: raid6: using algorithm neonx8 gen() 6509 MB/s
Dec 16 12:26:03.003171 kernel: raid6: .... xor() 4714 MB/s, rmw enabled
Dec 16 12:26:03.003239 kernel: raid6: using neon recovery algorithm
Dec 16 12:26:03.011064 kernel: xor: measuring software checksum speed
Dec 16 12:26:03.011140 kernel: 8regs : 11656 MB/sec
Dec 16 12:26:03.013315 kernel: 32regs : 13043 MB/sec
Dec 16 12:26:03.014648 kernel: arm64_neon : 8928 MB/sec
Dec 16 12:26:03.014682 kernel: xor: using function: 32regs (13043 MB/sec)
Dec 16 12:26:03.109074 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:26:03.120564 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:26:03.127148 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:03.193687 systemd-udevd[510]: Using default interface naming scheme 'v255'.
Dec 16 12:26:03.205962 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:03.215055 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:26:03.253097 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation
Dec 16 12:26:03.297766 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:26:03.305307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:26:03.432858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:26:03.445893 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:26:03.619460 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 16 12:26:03.619527 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 16 12:26:03.619847 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 16 12:26:03.621470 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 16 12:26:03.621828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:26:03.628124 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 16 12:26:03.622127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:03.632780 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 16 12:26:03.634046 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 16 12:26:03.634442 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:03.642847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:03.648424 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:03.657875 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:1d:ce:30:cd:ef
Dec 16 12:26:03.661357 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 16 12:26:03.661427 kernel: GPT:9289727 != 33554431
Dec 16 12:26:03.661452 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 16 12:26:03.663072 kernel: GPT:9289727 != 33554431
Dec 16 12:26:03.664618 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 16 12:26:03.664683 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:03.671585 (udev-worker)[553]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:26:03.705999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:03.724072 kernel: nvme nvme0: using unchecked data buffer
Dec 16 12:26:03.812702 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 16 12:26:03.895963 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 16 12:26:03.920780 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 16 12:26:03.929311 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 16 12:26:03.932702 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:26:03.978888 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 12:26:03.996104 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:26:03.999130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:26:04.007390 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:26:04.013473 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 16 12:26:04.018038 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 16 12:26:04.052082 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:04.054421 disk-uuid[688]: Primary Header is updated.
Dec 16 12:26:04.054421 disk-uuid[688]: Secondary Entries is updated.
Dec 16 12:26:04.054421 disk-uuid[688]: Secondary Header is updated.
Dec 16 12:26:04.057061 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:26:04.096047 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:05.106136 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 16 12:26:05.107998 disk-uuid[694]: The operation has completed successfully.
Dec 16 12:26:05.307356 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 16 12:26:05.307968 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 16 12:26:05.398616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 16 12:26:05.443043 sh[954]: Success
Dec 16 12:26:05.478096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 16 12:26:05.478173 kernel: device-mapper: uevent: version 1.0.3
Dec 16 12:26:05.481233 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 16 12:26:05.494083 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 16 12:26:05.600162 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 12:26:05.610271 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 16 12:26:05.635108 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 16 12:26:05.662051 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (977)
Dec 16 12:26:05.666703 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 16 12:26:05.666768 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:05.696762 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 16 12:26:05.696838 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 16 12:26:05.698123 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 16 12:26:05.700861 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 16 12:26:05.705479 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:26:05.709363 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 16 12:26:05.710668 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 16 12:26:05.740056 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 16 12:26:05.781059 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:11) scanned by mount (1000)
Dec 16 12:26:05.785648 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:05.785723 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:05.806956 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:26:05.807054 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:26:05.815101 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:05.816835 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 16 12:26:05.823240 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 16 12:26:05.952813 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:26:05.963319 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:26:06.048713 systemd-networkd[1149]: lo: Link UP
Dec 16 12:26:06.050588 systemd-networkd[1149]: lo: Gained carrier
Dec 16 12:26:06.055897 systemd-networkd[1149]: Enumeration completed
Dec 16 12:26:06.058865 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:26:06.058874 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:26:06.063927 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:26:06.072596 systemd[1]: Reached target network.target - Network.
Dec 16 12:26:06.084649 systemd-networkd[1149]: eth0: Link UP
Dec 16 12:26:06.084673 systemd-networkd[1149]: eth0: Gained carrier
Dec 16 12:26:06.084698 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:26:06.106164 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.28.27/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 12:26:06.158787 ignition[1060]: Ignition 2.22.0
Dec 16 12:26:06.158817 ignition[1060]: Stage: fetch-offline
Dec 16 12:26:06.159909 ignition[1060]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:06.159934 ignition[1060]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:06.169966 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:26:06.160341 ignition[1060]: Ignition finished successfully
Dec 16 12:26:06.181558 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 16 12:26:06.233083 ignition[1161]: Ignition 2.22.0
Dec 16 12:26:06.233115 ignition[1161]: Stage: fetch
Dec 16 12:26:06.234388 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:06.234927 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:06.235155 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:06.264092 ignition[1161]: PUT result: OK
Dec 16 12:26:06.271250 ignition[1161]: parsed url from cmdline: ""
Dec 16 12:26:06.271277 ignition[1161]: no config URL provided
Dec 16 12:26:06.271295 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 12:26:06.271324 ignition[1161]: no config at "/usr/lib/ignition/user.ign"
Dec 16 12:26:06.271375 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:06.273836 ignition[1161]: PUT result: OK
Dec 16 12:26:06.273938 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 16 12:26:06.278886 ignition[1161]: GET result: OK
Dec 16 12:26:06.279143 ignition[1161]: parsing config with SHA512: 8a84bada191c38e2435fe7b8a520f43e0cf3695bec6d34c3e14ec4ce6c48133176837bc84306d95612a2682e1d3f7c698a34c1572a1f69bc0685ee01b4f058c8
Dec 16 12:26:06.298515 unknown[1161]: fetched base config from "system"
Dec 16 12:26:06.298539 unknown[1161]: fetched base config from "system"
Dec 16 12:26:06.298916 unknown[1161]: fetched user config from "aws"
Dec 16 12:26:06.307244 ignition[1161]: fetch: fetch complete
Dec 16 12:26:06.307273 ignition[1161]: fetch: fetch passed
Dec 16 12:26:06.307411 ignition[1161]: Ignition finished successfully
Dec 16 12:26:06.316514 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 16 12:26:06.323293 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 12:26:06.385667 ignition[1167]: Ignition 2.22.0
Dec 16 12:26:06.385701 ignition[1167]: Stage: kargs
Dec 16 12:26:06.386455 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:06.386483 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:06.386629 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:06.392599 ignition[1167]: PUT result: OK
Dec 16 12:26:06.400789 ignition[1167]: kargs: kargs passed
Dec 16 12:26:06.400906 ignition[1167]: Ignition finished successfully
Dec 16 12:26:06.405322 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 12:26:06.412512 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 12:26:06.476250 ignition[1173]: Ignition 2.22.0
Dec 16 12:26:06.476279 ignition[1173]: Stage: disks
Dec 16 12:26:06.478117 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:06.478162 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:06.478355 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:06.487751 ignition[1173]: PUT result: OK
Dec 16 12:26:06.492889 ignition[1173]: disks: disks passed
Dec 16 12:26:06.493966 ignition[1173]: Ignition finished successfully
Dec 16 12:26:06.499517 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 12:26:06.503570 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 12:26:06.507206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 12:26:06.512264 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:26:06.516988 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:26:06.525508 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:26:06.531930 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 12:26:06.603217 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 16 12:26:06.608437 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 12:26:06.616587 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 12:26:06.779054 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 16 12:26:06.780900 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 12:26:06.785701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:26:06.792121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:26:06.797744 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 12:26:06.804750 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 12:26:06.804872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 12:26:06.804932 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:26:06.834502 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 12:26:06.841343 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 12:26:06.865053 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:11) scanned by mount (1200)
Dec 16 12:26:06.871064 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:06.871146 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:06.881782 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:26:06.881862 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:26:06.885550 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:26:06.958418 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory
Dec 16 12:26:06.970138 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory
Dec 16 12:26:06.980162 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory
Dec 16 12:26:06.991852 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 16 12:26:07.179158 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 12:26:07.184577 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 12:26:07.195697 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 12:26:07.227198 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 12:26:07.230472 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:07.265685 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 12:26:07.287932 ignition[1313]: INFO : Ignition 2.22.0
Dec 16 12:26:07.290209 ignition[1313]: INFO : Stage: mount
Dec 16 12:26:07.291870 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:07.291870 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:07.291870 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:07.300098 ignition[1313]: INFO : PUT result: OK
Dec 16 12:26:07.305615 ignition[1313]: INFO : mount: mount passed
Dec 16 12:26:07.308073 ignition[1313]: INFO : Ignition finished successfully
Dec 16 12:26:07.310570 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 12:26:07.316744 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 12:26:07.386238 systemd-networkd[1149]: eth0: Gained IPv6LL
Dec 16 12:26:07.783640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 12:26:07.826068 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:11) scanned by mount (1324)
Dec 16 12:26:07.830979 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 16 12:26:07.831076 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 16 12:26:07.840491 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 16 12:26:07.840561 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Dec 16 12:26:07.843996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 12:26:07.892999 ignition[1341]: INFO : Ignition 2.22.0
Dec 16 12:26:07.895252 ignition[1341]: INFO : Stage: files
Dec 16 12:26:07.895252 ignition[1341]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:07.895252 ignition[1341]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:07.895252 ignition[1341]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:07.905470 ignition[1341]: INFO : PUT result: OK
Dec 16 12:26:07.911671 ignition[1341]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 12:26:07.914878 ignition[1341]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 12:26:07.914878 ignition[1341]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 12:26:07.925920 ignition[1341]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 12:26:07.929490 ignition[1341]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 12:26:07.933720 unknown[1341]: wrote ssh authorized keys file for user: core
Dec 16 12:26:07.936486 ignition[1341]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 12:26:07.944158 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:26:07.949544 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 16 12:26:08.052862 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 12:26:08.187555 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:26:08.191855 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 12:26:08.224760 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:26:08.224760 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:26:08.224760 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:26:08.238493 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:26:08.238493 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:26:08.238493 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 16 12:26:08.676870 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 12:26:09.105757 ignition[1341]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 16 12:26:09.105757 ignition[1341]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 12:26:09.113649 ignition[1341]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:26:09.121402 ignition[1341]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 12:26:09.121402 ignition[1341]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 12:26:09.121402 ignition[1341]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:26:09.121402 ignition[1341]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 12:26:09.140494 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:26:09.140494 ignition[1341]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 12:26:09.140494 ignition[1341]: INFO : files: files passed
Dec 16 12:26:09.140494 ignition[1341]: INFO : Ignition finished successfully
Dec 16 12:26:09.131946 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 12:26:09.145002 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 12:26:09.158285 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 12:26:09.187392 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 12:26:09.189570 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 12:26:09.205071 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:26:09.205071 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:26:09.213786 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 12:26:09.220529 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:26:09.227838 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 12:26:09.234530 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 12:26:09.314529 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 12:26:09.314987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:26:09.324675 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 12:26:09.327693 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 12:26:09.334710 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 12:26:09.336347 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 12:26:09.395084 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:26:09.403159 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 12:26:09.443932 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:26:09.449608 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:26:09.455407 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 12:26:09.457781 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 12:26:09.458172 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 12:26:09.465093 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 12:26:09.472372 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 12:26:09.475486 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 12:26:09.482360 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 12:26:09.487885 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 12:26:09.491479 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 12:26:09.498746 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 12:26:09.503437 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 12:26:09.509238 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 12:26:09.512305 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 12:26:09.516629 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 12:26:09.519645 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 12:26:09.519886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 12:26:09.526375 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:26:09.530107 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:26:09.533558 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 12:26:09.539846 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:26:09.543151 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 12:26:09.543760 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 12:26:09.547842 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 12:26:09.548508 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 12:26:09.556463 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 12:26:09.556689 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 12:26:09.563206 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 12:26:09.575756 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 16 12:26:09.579444 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:26:09.584255 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 12:26:09.596752 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 12:26:09.597216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:26:09.600495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 12:26:09.600799 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:26:09.619878 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:26:09.623254 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:26:09.655099 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:26:09.666652 ignition[1395]: INFO : Ignition 2.22.0
Dec 16 12:26:09.666652 ignition[1395]: INFO : Stage: umount
Dec 16 12:26:09.666350 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:26:09.674961 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 12:26:09.674961 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 16 12:26:09.674961 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 16 12:26:09.670340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:26:09.685630 ignition[1395]: INFO : PUT result: OK
Dec 16 12:26:09.690591 ignition[1395]: INFO : umount: umount passed
Dec 16 12:26:09.692528 ignition[1395]: INFO : Ignition finished successfully
Dec 16 12:26:09.700450 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 12:26:09.700722 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:26:09.704798 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:26:09.704903 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:26:09.705627 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:26:09.705734 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:26:09.706731 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:26:09.706831 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:26:09.707845 systemd[1]: Stopped target network.target - Network.
Dec 16 12:26:09.708532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:26:09.708630 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:26:09.708951 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:26:09.711750 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:26:09.721360 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:09.724427 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:26:09.727216 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:26:09.731972 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:26:09.732214 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:26:09.736361 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:26:09.736447 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:26:09.740148 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:26:09.740268 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:26:09.744126 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:26:09.744218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:26:09.747412 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:26:09.747511 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:26:09.754778 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:26:09.760542 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:09.807797 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:26:09.808075 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:26:09.820374 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:26:09.820817 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:26:09.821035 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:26:09.841602 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:26:09.843752 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:26:09.850033 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:26:09.850304 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:09.859157 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:26:09.863784 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:26:09.863912 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:26:09.868544 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:26:09.868666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:09.883647 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:26:09.883761 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:09.889447 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:26:09.889576 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:09.908880 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:09.917786 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:26:09.917942 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:09.947583 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:26:09.950422 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:09.954582 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:26:09.954677 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:09.960316 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:26:09.960395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:09.963465 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:26:09.963582 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:26:09.974093 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:26:09.974216 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:26:09.990267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:26:09.990415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:26:10.001564 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:26:10.009249 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:26:10.009409 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:26:10.019297 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:26:10.019438 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:10.025327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:26:10.025445 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:10.038400 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 12:26:10.041861 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 12:26:10.045244 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:10.047431 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:26:10.057126 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:26:10.068593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:26:10.068857 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:26:10.079747 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:26:10.086505 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:26:10.121969 systemd[1]: Switching root.
Dec 16 12:26:10.160095 systemd-journald[260]: Journal stopped
Dec 16 12:26:12.225178 systemd-journald[260]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:26:12.225308 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:26:12.225352 kernel: SELinux: policy capability open_perms=1
Dec 16 12:26:12.225390 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:26:12.225418 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:26:12.225446 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:26:12.225475 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:26:12.225503 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:26:12.225532 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:26:12.225560 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:26:12.225589 kernel: audit: type=1403 audit(1765887970.471:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:26:12.225629 systemd[1]: Successfully loaded SELinux policy in 77.178ms.
Dec 16 12:26:12.225677 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.445ms.
Dec 16 12:26:12.225709 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:26:12.225741 systemd[1]: Detected virtualization amazon.
Dec 16 12:26:12.225770 systemd[1]: Detected architecture arm64.
Dec 16 12:26:12.225799 systemd[1]: Detected first boot.
Dec 16 12:26:12.225829 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:26:12.225860 zram_generator::config[1439]: No configuration found.
Dec 16 12:26:12.225891 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:26:12.225921 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:26:12.225953 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:26:12.225993 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:26:12.228128 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:26:12.228195 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:26:12.228229 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:26:12.228259 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:26:12.228289 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:26:12.228328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:26:12.228364 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:26:12.228395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:26:12.228426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:26:12.230066 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:26:12.230136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:26:12.230169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:12.230200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:26:12.230228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:26:12.230267 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:26:12.230300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:26:12.230331 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 12:26:12.230361 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:26:12.230389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:26:12.230419 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:26:12.230448 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:26:12.230479 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:26:12.230511 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:26:12.230539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:26:12.230568 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:26:12.230598 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:26:12.230628 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:26:12.230658 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:26:12.230688 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:26:12.230726 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:26:12.230754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:12.230788 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:12.230816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:12.230844 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:26:12.230875 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:26:12.230905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:26:12.230936 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:26:12.230967 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:26:12.230995 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:26:12.231049 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:26:12.231091 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:26:12.231121 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:26:12.231153 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:26:12.231183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:26:12.231213 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:26:12.231241 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:26:12.231269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:26:12.231297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:26:12.231329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:26:12.231356 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:26:12.231384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:26:12.231412 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:26:12.231439 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:26:12.231467 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:26:12.231494 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:26:12.231521 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:26:12.231549 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:26:12.231582 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:26:12.231614 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:26:12.231644 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:26:12.231674 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:26:12.231705 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:26:12.231735 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:26:12.231771 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:26:12.231801 systemd[1]: Stopped verity-setup.service.
Dec 16 12:26:12.231829 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:26:12.231861 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:26:12.231898 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:26:12.231929 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:26:12.231958 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:26:12.231987 kernel: fuse: init (API version 7.41)
Dec 16 12:26:12.235602 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:26:12.235694 kernel: loop: module loaded
Dec 16 12:26:12.235726 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:26:12.235759 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:26:12.235789 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:26:12.235830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:26:12.235860 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:26:12.235888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:26:12.235919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:26:12.235948 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:26:12.235977 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:26:12.236095 systemd-journald[1522]: Collecting audit messages is disabled.
Dec 16 12:26:12.236161 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:26:12.236194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:26:12.236228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:12.236260 systemd-journald[1522]: Journal started
Dec 16 12:26:12.236311 systemd-journald[1522]: Runtime Journal (/run/log/journal/ec213b03ab6859f540efd5bbab58adc4) is 8M, max 75.3M, 67.3M free.
Dec 16 12:26:11.592237 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:26:11.615177 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 12:26:12.247921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:26:12.248056 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:26:11.616012 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:26:12.278335 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:26:12.284160 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:26:12.289565 kernel: ACPI: bus type drm_connector registered
Dec 16 12:26:12.290938 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:26:12.292672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:26:12.319573 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:26:12.327747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:26:12.339418 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:26:12.345394 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:26:12.345474 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:26:12.351535 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:26:12.371459 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:26:12.376755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:26:12.383471 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:26:12.391830 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:26:12.397670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:26:12.407182 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:26:12.413697 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:26:12.416711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:26:12.434800 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:26:12.443930 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:26:12.453160 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:26:12.457803 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:26:12.460881 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:26:12.500793 systemd-journald[1522]: Time spent on flushing to /var/log/journal/ec213b03ab6859f540efd5bbab58adc4 is 157.924ms for 925 entries.
Dec 16 12:26:12.500793 systemd-journald[1522]: System Journal (/var/log/journal/ec213b03ab6859f540efd5bbab58adc4) is 8M, max 195.6M, 187.6M free.
Dec 16 12:26:12.682495 systemd-journald[1522]: Received client request to flush runtime journal.
Dec 16 12:26:12.682598 kernel: loop0: detected capacity change from 0 to 211168
Dec 16 12:26:12.536992 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:26:12.540252 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:26:12.548400 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:26:12.610133 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:12.629965 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:26:12.645312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:26:12.649643 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:26:12.652157 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:26:12.659869 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:26:12.691236 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:26:12.734552 systemd-tmpfiles[1583]: ACLs are not supported, ignoring.
Dec 16 12:26:12.734593 systemd-tmpfiles[1583]: ACLs are not supported, ignoring.
Dec 16 12:26:12.749179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:12.909191 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:26:12.937171 kernel: loop1: detected capacity change from 0 to 61264
Dec 16 12:26:13.002081 kernel: loop2: detected capacity change from 0 to 100632
Dec 16 12:26:13.068057 kernel: loop3: detected capacity change from 0 to 119840
Dec 16 12:26:13.130070 kernel: loop4: detected capacity change from 0 to 211168
Dec 16 12:26:13.158082 kernel: loop5: detected capacity change from 0 to 61264
Dec 16 12:26:13.183080 kernel: loop6: detected capacity change from 0 to 100632
Dec 16 12:26:13.223076 kernel: loop7: detected capacity change from 0 to 119840
Dec 16 12:26:13.246874 (sd-merge)[1596]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 16 12:26:13.249178 (sd-merge)[1596]: Merged extensions into '/usr'.
Dec 16 12:26:13.261354 systemd[1]: Reload requested from client PID 1573 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:26:13.261380 systemd[1]: Reloading...
Dec 16 12:26:13.270084 ldconfig[1568]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:26:13.438064 zram_generator::config[1634]: No configuration found.
Dec 16 12:26:13.855415 systemd[1]: Reloading finished in 593 ms.
Dec 16 12:26:13.895642 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:26:13.899099 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:26:13.903233 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:26:13.920112 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:26:13.930309 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:26:13.940076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:13.968270 systemd[1]: Reload requested from client PID 1676 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:26:13.968316 systemd[1]: Reloading...
Dec 16 12:26:14.015852 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:26:14.016626 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:26:14.017872 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:26:14.020762 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:26:14.025205 systemd-tmpfiles[1677]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:26:14.025909 systemd-tmpfiles[1677]: ACLs are not supported, ignoring.
Dec 16 12:26:14.028190 systemd-tmpfiles[1677]: ACLs are not supported, ignoring.
Dec 16 12:26:14.044552 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:26:14.044757 systemd-tmpfiles[1677]: Skipping /boot
Dec 16 12:26:14.046052 systemd-udevd[1678]: Using default interface naming scheme 'v255'.
Dec 16 12:26:14.086338 systemd-tmpfiles[1677]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:26:14.086553 systemd-tmpfiles[1677]: Skipping /boot
Dec 16 12:26:14.194226 zram_generator::config[1708]: No configuration found.
Dec 16 12:26:14.539304 (udev-worker)[1725]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:26:14.902966 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 12:26:14.903298 systemd[1]: Reloading finished in 934 ms.
Dec 16 12:26:14.985355 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:14.990051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:15.088819 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:26:15.099446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:26:15.110444 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:26:15.119551 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:26:15.136318 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:15.146213 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:26:15.167733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:26:15.228859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:26:15.240948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:26:15.250171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:26:15.257543 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:26:15.257941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:26:15.282212 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:26:15.293271 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:15.337446 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:26:15.338994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:26:15.339499 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:26:15.361157 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:26:15.392630 augenrules[1909]: No rules
Dec 16 12:26:15.397771 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:26:15.402408 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:26:15.404193 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:26:15.413140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:26:15.421414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:26:15.424129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:26:15.424412 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:26:15.424756 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:26:15.457645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:26:15.463512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:26:15.486260 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:26:15.489301 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:26:15.491215 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:26:15.491697 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:26:15.516386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:26:15.517723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:26:15.523064 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:26:15.524184 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:26:15.527862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:26:15.529188 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:26:15.588198 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:26:15.588327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:26:15.607313 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:26:15.664565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 12:26:15.672400 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:26:15.747191 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:26:15.752728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:15.757332 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:26:15.892220 systemd-networkd[1865]: lo: Link UP
Dec 16 12:26:15.892779 systemd-networkd[1865]: lo: Gained carrier
Dec 16 12:26:15.896578 systemd-networkd[1865]: Enumeration completed
Dec 16 12:26:15.897070 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:26:15.900691 systemd-networkd[1865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:26:15.900700 systemd-networkd[1865]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:26:15.904748 systemd-networkd[1865]: eth0: Link UP
Dec 16 12:26:15.905337 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:26:15.908601 systemd-networkd[1865]: eth0: Gained carrier
Dec 16 12:26:15.908644 systemd-networkd[1865]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:26:15.908797 systemd-resolved[1870]: Positive Trust Anchors:
Dec 16 12:26:15.908822 systemd-resolved[1870]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:26:15.908888 systemd-resolved[1870]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:26:15.912403 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:26:15.929187 systemd-networkd[1865]: eth0: DHCPv4 address 172.31.28.27/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 12:26:15.934350 systemd-resolved[1870]: Defaulting to hostname 'linux'.
Dec 16 12:26:15.941231 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:26:15.946318 systemd[1]: Reached target network.target - Network.
Dec 16 12:26:15.948670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:26:15.951605 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:26:15.954417 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:26:15.958331 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:26:15.962544 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:26:15.966535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:26:15.971220 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:26:15.975399 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:26:15.975471 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:26:15.979276 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:26:15.983377 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:26:15.988704 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:26:15.995859 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:26:16.000515 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:26:16.003569 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:26:16.010738 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:26:16.013958 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:26:16.018705 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:26:16.022400 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:26:16.026128 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:26:16.028833 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:26:16.032071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:26:16.032383 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:26:16.034779 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:26:16.040327 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:26:16.049724 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:26:16.055299 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 12:26:16.067526 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 12:26:16.074614 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 12:26:16.077215 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 12:26:16.084396 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 12:26:16.092522 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 12:26:16.106055 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 12:26:16.118299 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 16 12:26:16.124981 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 12:26:16.142334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 12:26:16.152763 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 12:26:16.157901 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 12:26:16.158954 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 12:26:16.163553 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 12:26:16.179070 jq[1964]: false Dec 16 12:26:16.180223 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 12:26:16.194697 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 12:26:16.198441 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Dec 16 12:26:16.200516 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 12:26:16.207434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 12:26:16.222480 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 12:26:16.328728 jq[1974]: true Dec 16 12:26:16.336237 extend-filesystems[1965]: Found /dev/nvme0n1p6 Dec 16 12:26:16.360265 ntpd[1967]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: ---------------------------------------------------- Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: corporation. Support and training for ntp-4 are Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: available at https://www.nwtime.org/support Dec 16 12:26:16.363416 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: ---------------------------------------------------- Dec 16 12:26:16.360400 ntpd[1967]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:16.360422 ntpd[1967]: ---------------------------------------------------- Dec 16 12:26:16.360440 ntpd[1967]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:16.360457 ntpd[1967]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:16.360475 ntpd[1967]: corporation. 
Support and training for ntp-4 are Dec 16 12:26:16.360492 ntpd[1967]: available at https://www.nwtime.org/support Dec 16 12:26:16.360510 ntpd[1967]: ---------------------------------------------------- Dec 16 12:26:16.382073 ntpd[1967]: proto: precision = 0.096 usec (-23) Dec 16 12:26:16.386914 extend-filesystems[1965]: Found /dev/nvme0n1p9 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: proto: precision = 0.096 usec (-23) Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: basedate set to 2025-11-30 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Listen normally on 3 eth0 172.31.28.27:123 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: bind(21) AF_INET6 [fe80::41d:ceff:fe30:cdef%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 12:26:16.397691 ntpd[1967]: 16 Dec 12:26:16 ntpd[1967]: unable to create socket on eth0 (5) for [fe80::41d:ceff:fe30:cdef%2]:123 Dec 16 12:26:16.392761 ntpd[1967]: basedate set to 2025-11-30 Dec 16 12:26:16.402868 tar[1977]: linux-arm64/LICENSE Dec 16 12:26:16.402868 tar[1977]: linux-arm64/helm Dec 16 12:26:16.403455 update_engine[1973]: I20251216 12:26:16.386701 1973 main.cc:92] Flatcar Update Engine starting Dec 16 12:26:16.392790 ntpd[1967]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:16.392962 ntpd[1967]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:16.407122 (ntainerd)[2000]: containerd.service: Referenced but unset environment variable evaluates 
to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 12:26:16.414440 extend-filesystems[1965]: Checking size of /dev/nvme0n1p9 Dec 16 12:26:16.393008 ntpd[1967]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:16.417075 systemd-coredump[2009]: Process 1967 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Dec 16 12:26:16.393392 ntpd[1967]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:16.393450 ntpd[1967]: Listen normally on 3 eth0 172.31.28.27:123 Dec 16 12:26:16.393501 ntpd[1967]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:16.393551 ntpd[1967]: bind(21) AF_INET6 [fe80::41d:ceff:fe30:cdef%2]:123 flags 0x811 failed: Cannot assign requested address Dec 16 12:26:16.393593 ntpd[1967]: unable to create socket on eth0 (5) for [fe80::41d:ceff:fe30:cdef%2]:123 Dec 16 12:26:16.423712 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Dec 16 12:26:16.430718 dbus-daemon[1962]: [system] SELinux support is enabled Dec 16 12:26:16.438305 systemd[1]: Started systemd-coredump@0-2009-0.service - Process Core Dump (PID 2009/UID 0). Dec 16 12:26:16.441814 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 12:26:16.455542 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 12:26:16.457157 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 12:26:16.478329 update_engine[1973]: I20251216 12:26:16.463754 1973 update_check_scheduler.cc:74] Next update check in 3m45s Dec 16 12:26:16.467926 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 16 12:26:16.463958 dbus-daemon[1962]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1865 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 12:26:16.468042 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 12:26:16.476269 dbus-daemon[1962]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 16 12:26:16.471317 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 12:26:16.471374 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 12:26:16.476533 systemd[1]: Started update-engine.service - Update Engine. Dec 16 12:26:16.509080 jq[2005]: true Dec 16 12:26:16.502612 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 12:26:16.509645 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 12:26:16.540077 extend-filesystems[1965]: Resized partition /dev/nvme0n1p9 Dec 16 12:26:16.559532 extend-filesystems[2023]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 12:26:16.589062 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Dec 16 12:26:16.593836 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 16 12:26:16.674857 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Dec 16 12:26:16.699574 coreos-metadata[1961]: Dec 16 12:26:16.697 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:26:16.703058 coreos-metadata[1961]: Dec 16 12:26:16.701 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 16 12:26:16.703058 coreos-metadata[1961]: Dec 16 12:26:16.702 INFO Fetch successful Dec 16 12:26:16.703058 coreos-metadata[1961]: Dec 16 12:26:16.702 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 16 12:26:16.706808 coreos-metadata[1961]: Dec 16 12:26:16.703 INFO Fetch successful Dec 16 12:26:16.706808 coreos-metadata[1961]: Dec 16 12:26:16.703 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 16 12:26:16.706808 coreos-metadata[1961]: Dec 16 12:26:16.705 INFO Fetch successful Dec 16 12:26:16.706808 coreos-metadata[1961]: Dec 16 12:26:16.706 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.707 INFO Fetch successful Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.709 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.710 INFO Fetch failed with 404: resource not found Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.710 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.713 INFO Fetch successful Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.713 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.722 INFO Fetch successful Dec 16 12:26:16.729126 coreos-metadata[1961]: Dec 16 12:26:16.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 16 
12:26:16.730224 coreos-metadata[1961]: Dec 16 12:26:16.729 INFO Fetch successful Dec 16 12:26:16.730224 coreos-metadata[1961]: Dec 16 12:26:16.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 16 12:26:16.732701 coreos-metadata[1961]: Dec 16 12:26:16.732 INFO Fetch successful Dec 16 12:26:16.732701 coreos-metadata[1961]: Dec 16 12:26:16.732 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 16 12:26:16.743759 coreos-metadata[1961]: Dec 16 12:26:16.743 INFO Fetch successful Dec 16 12:26:16.878796 systemd-logind[1972]: Watching system buttons on /dev/input/event0 (Power Button) Dec 16 12:26:16.893303 systemd-logind[1972]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 16 12:26:16.894097 systemd-logind[1972]: New seat seat0. Dec 16 12:26:16.904464 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 12:26:16.947054 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Dec 16 12:26:16.961833 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 16 12:26:16.961833 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 16 12:26:16.961833 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Dec 16 12:26:16.981293 extend-filesystems[1965]: Resized filesystem in /dev/nvme0n1p9 Dec 16 12:26:16.980068 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 12:26:16.984052 bash[2081]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:26:17.021814 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 12:26:17.029590 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 12:26:17.039613 systemd[1]: Starting sshkeys.service... Dec 16 12:26:17.046557 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 16 12:26:17.052757 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 12:26:17.158295 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 12:26:17.166960 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 12:26:17.284400 locksmithd[2020]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:26:17.438586 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 12:26:17.473370 dbus-daemon[1962]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 12:26:17.491945 dbus-daemon[1962]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2017 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 12:26:17.514121 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 12:26:17.601229 coreos-metadata[2129]: Dec 16 12:26:17.599 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 16 12:26:17.618758 coreos-metadata[2129]: Dec 16 12:26:17.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 16 12:26:17.622089 coreos-metadata[2129]: Dec 16 12:26:17.619 INFO Fetch successful Dec 16 12:26:17.622089 coreos-metadata[2129]: Dec 16 12:26:17.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 12:26:17.620487 systemd-coredump[2013]: Process 1967 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1967: #0 0x0000aaaae4a20b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaae49cfe60 n/a (ntpd + 0xfe60) #2 0x0000aaaae49d0240 n/a (ntpd + 0x10240) #3 0x0000aaaae49cbe14 n/a (ntpd + 0xbe14) #4 0x0000aaaae49cd3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaae49d5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaae49c738c n/a (ntpd + 0x738c) #7 0x0000ffffb2ed2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffb2ed2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaae49c73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Dec 16 12:26:17.630598 coreos-metadata[2129]: Dec 16 12:26:17.629 INFO Fetch successful Dec 16 12:26:17.632743 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Dec 16 12:26:17.633280 systemd[1]: ntpd.service: Failed with result 'core-dump'. Dec 16 12:26:17.634659 unknown[2129]: wrote ssh authorized keys file for user: core Dec 16 12:26:17.645626 systemd[1]: systemd-coredump@0-2009-0.service: Deactivated successfully. Dec 16 12:26:17.748074 update-ssh-keys[2162]: Updated "/home/core/.ssh/authorized_keys" Dec 16 12:26:17.750939 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 16 12:26:17.754191 systemd-networkd[1865]: eth0: Gained IPv6LL Dec 16 12:26:17.756700 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 12:26:17.762707 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 12:26:17.778158 systemd[1]: Finished sshkeys.service. Dec 16 12:26:17.785515 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 12:26:17.797976 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 16 12:26:17.806732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:17.815601 systemd[1]: Started ntpd.service - Network Time Service. Dec 16 12:26:17.828958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Dec 16 12:26:17.906071 containerd[2000]: time="2025-12-16T12:26:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 12:26:17.921868 containerd[2000]: time="2025-12-16T12:26:17.917918989Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 12:26:17.993011 ntpd[2177]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: ---------------------------------------------------- Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: corporation. 
Support and training for ntp-4 are Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: available at https://www.nwtime.org/support Dec 16 12:26:17.997003 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: ---------------------------------------------------- Dec 16 12:26:17.996235 ntpd[2177]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:26:17.997998 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: proto: precision = 0.096 usec (-23) Dec 16 12:26:17.997998 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: basedate set to 2025-11-30 Dec 16 12:26:17.997998 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:17.997998 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:17.996257 ntpd[2177]: ---------------------------------------------------- Dec 16 12:26:18.005546 ntpd[2177]: 16 Dec 12:26:17 ntpd[2177]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:18.005546 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:18.005546 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: Listen normally on 3 eth0 172.31.28.27:123 Dec 16 12:26:18.005546 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:18.005546 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: Listen normally on 5 eth0 [fe80::41d:ceff:fe30:cdef%2]:123 Dec 16 12:26:18.005546 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: Listening on routing socket on fd #22 for interface updates Dec 16 12:26:17.996274 ntpd[2177]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:26:17.996292 ntpd[2177]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:26:17.996309 ntpd[2177]: corporation. 
Support and training for ntp-4 are Dec 16 12:26:17.996326 ntpd[2177]: available at https://www.nwtime.org/support Dec 16 12:26:17.996343 ntpd[2177]: ---------------------------------------------------- Dec 16 12:26:17.997481 ntpd[2177]: proto: precision = 0.096 usec (-23) Dec 16 12:26:17.997816 ntpd[2177]: basedate set to 2025-11-30 Dec 16 12:26:17.997839 ntpd[2177]: gps base set to 2025-11-30 (week 2395) Dec 16 12:26:17.997973 ntpd[2177]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:26:18.000213 ntpd[2177]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:26:18.001672 ntpd[2177]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:26:18.001746 ntpd[2177]: Listen normally on 3 eth0 172.31.28.27:123 Dec 16 12:26:18.001796 ntpd[2177]: Listen normally on 4 lo [::1]:123 Dec 16 12:26:18.001842 ntpd[2177]: Listen normally on 5 eth0 [fe80::41d:ceff:fe30:cdef%2]:123 Dec 16 12:26:18.001888 ntpd[2177]: Listening on routing socket on fd #22 for interface updates Dec 16 12:26:18.007290 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 16 12:26:18.016792 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:18.016792 ntpd[2177]: 16 Dec 12:26:18 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:18.013779 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:18.013829 ntpd[2177]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:26:18.033242 containerd[2000]: time="2025-12-16T12:26:18.032572858Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.732µs" Dec 16 12:26:18.033242 containerd[2000]: time="2025-12-16T12:26:18.032642818Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 12:26:18.033242 containerd[2000]: time="2025-12-16T12:26:18.032683222Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 12:26:18.033500 containerd[2000]: time="2025-12-16T12:26:18.032998438Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 12:26:18.033584 containerd[2000]: time="2025-12-16T12:26:18.033514126Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 12:26:18.033633 containerd[2000]: time="2025-12-16T12:26:18.033584314Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:26:18.033806 containerd[2000]: time="2025-12-16T12:26:18.033749674Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 12:26:18.033806 containerd[2000]: time="2025-12-16T12:26:18.033795466Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:26:18.034327 containerd[2000]: 
time="2025-12-16T12:26:18.034246414Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 12:26:18.034327 containerd[2000]: time="2025-12-16T12:26:18.034310002Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:26:18.034521 containerd[2000]: time="2025-12-16T12:26:18.034345150Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 12:26:18.034521 containerd[2000]: time="2025-12-16T12:26:18.034368946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 12:26:18.034618 containerd[2000]: time="2025-12-16T12:26:18.034585966Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 12:26:18.042045 containerd[2000]: time="2025-12-16T12:26:18.040707298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:26:18.042045 containerd[2000]: time="2025-12-16T12:26:18.040836190Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 12:26:18.042045 containerd[2000]: time="2025-12-16T12:26:18.040866058Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 12:26:18.042045 containerd[2000]: time="2025-12-16T12:26:18.040930786Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 12:26:18.042045 containerd[2000]: 
time="2025-12-16T12:26:18.041400346Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 12:26:18.042045 containerd[2000]: time="2025-12-16T12:26:18.041565202Z" level=info msg="metadata content store policy set" policy=shared Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057137986Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057255946Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057301966Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057340558Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057370378Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057401302Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057432358Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057462190Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057492046Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057519310Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057543478Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057573838Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057829846Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 12:26:18.058337 containerd[2000]: time="2025-12-16T12:26:18.057874966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 12:26:18.059055 containerd[2000]: time="2025-12-16T12:26:18.057908698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 12:26:18.059055 containerd[2000]: time="2025-12-16T12:26:18.057940282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 12:26:18.059055 containerd[2000]: time="2025-12-16T12:26:18.057968818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 12:26:18.059055 containerd[2000]: time="2025-12-16T12:26:18.057998062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 12:26:18.059610 containerd[2000]: time="2025-12-16T12:26:18.059406886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 12:26:18.059610 containerd[2000]: time="2025-12-16T12:26:18.059501026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 12:26:18.059610 containerd[2000]: time="2025-12-16T12:26:18.059536930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1 Dec 16 12:26:18.063174 containerd[2000]: time="2025-12-16T12:26:18.062084818Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 12:26:18.063174 containerd[2000]: time="2025-12-16T12:26:18.062191738Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 12:26:18.063174 containerd[2000]: time="2025-12-16T12:26:18.062705278Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 12:26:18.063174 containerd[2000]: time="2025-12-16T12:26:18.062790082Z" level=info msg="Start snapshots syncer" Dec 16 12:26:18.063174 containerd[2000]: time="2025-12-16T12:26:18.062892850Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 12:26:18.076052 containerd[2000]: time="2025-12-16T12:26:18.073871194Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMSco
reAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 12:26:18.081828 containerd[2000]: time="2025-12-16T12:26:18.081741790Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 12:26:18.089302 containerd[2000]: time="2025-12-16T12:26:18.085846714Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 12:26:18.089302 containerd[2000]: time="2025-12-16T12:26:18.089143354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 12:26:18.089949 containerd[2000]: time="2025-12-16T12:26:18.089263954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 12:26:18.089949 containerd[2000]: time="2025-12-16T12:26:18.089775322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 12:26:18.089949 containerd[2000]: time="2025-12-16T12:26:18.089826094Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 12:26:18.089949 containerd[2000]: time="2025-12-16T12:26:18.089895754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 
12:26:18.090372 containerd[2000]: time="2025-12-16T12:26:18.090303982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 12:26:18.090589 containerd[2000]: time="2025-12-16T12:26:18.090530134Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 12:26:18.091236 containerd[2000]: time="2025-12-16T12:26:18.091155970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 12:26:18.092068 containerd[2000]: time="2025-12-16T12:26:18.091733278Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 12:26:18.097318 containerd[2000]: time="2025-12-16T12:26:18.094107526Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 12:26:18.097318 containerd[2000]: time="2025-12-16T12:26:18.094283758Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:26:18.097318 containerd[2000]: time="2025-12-16T12:26:18.094522474Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 12:26:18.097318 containerd[2000]: time="2025-12-16T12:26:18.094560970Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.097659346Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.097743370Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.097782106Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.097837246Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.098048986Z" level=info msg="runtime interface created" Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.098071258Z" level=info msg="created NRI interface" Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.098095474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.098157502Z" level=info msg="Connect containerd service" Dec 16 12:26:18.098450 containerd[2000]: time="2025-12-16T12:26:18.098256034Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:26:18.107959 amazon-ssm-agent[2175]: Initializing new seelog logger Dec 16 12:26:18.108503 amazon-ssm-agent[2175]: New Seelog Logger Creation Complete Dec 16 12:26:18.108503 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.108503 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.111062 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 processing appconfig overrides Dec 16 12:26:18.111062 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.111062 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.111062 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 processing appconfig overrides Dec 16 12:26:18.111324 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 16 12:26:18.111324 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.111324 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 processing appconfig overrides Dec 16 12:26:18.113327 containerd[2000]: time="2025-12-16T12:26:18.109088590Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:26:18.113451 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.1095 INFO Proxy environment variables: Dec 16 12:26:18.120537 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.120537 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:18.120537 amazon-ssm-agent[2175]: 2025/12/16 12:26:18 processing appconfig overrides Dec 16 12:26:18.214776 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.1101 INFO https_proxy: Dec 16 12:26:18.240626 polkitd[2149]: Started polkitd version 126 Dec 16 12:26:18.287812 polkitd[2149]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:26:18.288572 polkitd[2149]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:26:18.288700 polkitd[2149]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:26:18.295086 polkitd[2149]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:26:18.295199 polkitd[2149]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:26:18.295302 polkitd[2149]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:26:18.303567 polkitd[2149]: Finished loading, compiling and executing 2 rules Dec 16 12:26:18.304419 systemd[1]: Started 
polkit.service - Authorization Manager. Dec 16 12:26:18.314460 dbus-daemon[1962]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:26:18.315585 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.1101 INFO http_proxy: Dec 16 12:26:18.316004 polkitd[2149]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:26:18.388572 systemd-hostnamed[2017]: Hostname set to (transient) Dec 16 12:26:18.390115 systemd-resolved[1870]: System hostname changed to 'ip-172-31-28-27'. Dec 16 12:26:18.409863 containerd[2000]: time="2025-12-16T12:26:18.409763027Z" level=info msg="Start subscribing containerd event" Dec 16 12:26:18.410221 containerd[2000]: time="2025-12-16T12:26:18.410167067Z" level=info msg="Start recovering state" Dec 16 12:26:18.410499 containerd[2000]: time="2025-12-16T12:26:18.410467823Z" level=info msg="Start event monitor" Dec 16 12:26:18.411464 containerd[2000]: time="2025-12-16T12:26:18.411163727Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:26:18.411464 containerd[2000]: time="2025-12-16T12:26:18.411217007Z" level=info msg="Start streaming server" Dec 16 12:26:18.411464 containerd[2000]: time="2025-12-16T12:26:18.411239927Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:26:18.411464 containerd[2000]: time="2025-12-16T12:26:18.411257519Z" level=info msg="runtime interface starting up..." Dec 16 12:26:18.411464 containerd[2000]: time="2025-12-16T12:26:18.411272927Z" level=info msg="starting plugins..." Dec 16 12:26:18.411464 containerd[2000]: time="2025-12-16T12:26:18.411314723Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:26:18.412254 containerd[2000]: time="2025-12-16T12:26:18.411970320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:26:18.419685 containerd[2000]: time="2025-12-16T12:26:18.414126360Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 16 12:26:18.419685 containerd[2000]: time="2025-12-16T12:26:18.415535460Z" level=info msg="containerd successfully booted in 0.520723s" Dec 16 12:26:18.415774 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:26:18.422191 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.1102 INFO no_proxy: Dec 16 12:26:18.519113 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.1108 INFO Checking if agent identity type OnPrem can be assumed Dec 16 12:26:18.619221 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.1109 INFO Checking if agent identity type EC2 can be assumed Dec 16 12:26:18.670113 sshd_keygen[1996]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:26:18.718827 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3666 INFO Agent will take identity from EC2 Dec 16 12:26:18.795259 tar[1977]: linux-arm64/README.md Dec 16 12:26:18.800282 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:26:18.815609 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:26:18.818232 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3733 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 12:26:18.821782 systemd[1]: Started sshd@0-172.31.28.27:22-139.178.89.65:41744.service - OpenSSH per-connection server daemon (139.178.89.65:41744). Dec 16 12:26:18.847123 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:26:18.883126 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:26:18.883893 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:26:18.892510 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:26:18.919164 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3733 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 16 12:26:18.941596 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:26:18.949715 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 16 12:26:18.959341 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:26:18.962558 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:26:19.017366 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3733 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 12:26:19.027467 amazon-ssm-agent[2175]: 2025/12/16 12:26:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:19.027467 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:26:19.027667 amazon-ssm-agent[2175]: 2025/12/16 12:26:19 processing appconfig overrides Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3733 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3734 INFO [Registrar] Starting registrar module Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3758 INFO [EC2Identity] Checking disk for registration info Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3759 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.3759 INFO [EC2Identity] Generating registration keypair Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.9686 INFO [EC2Identity] Checking write access before registering Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:18.9696 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0270 INFO [EC2Identity] EC2 registration was successful. Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0271 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Dec 16 12:26:19.059751 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0273 INFO [CredentialRefresher] credentialRefresher has started Dec 16 12:26:19.060740 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0273 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 12:26:19.060740 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0591 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 12:26:19.060740 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0594 INFO [CredentialRefresher] Credentials ready Dec 16 12:26:19.116680 sshd[2230]: Accepted publickey for core from 139.178.89.65 port 41744 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:19.117597 amazon-ssm-agent[2175]: 2025-12-16 12:26:19.0599 INFO [CredentialRefresher] Next credential rotation will be in 29.999987291 minutes Dec 16 12:26:19.120892 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:19.134984 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:26:19.140905 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:26:19.166148 systemd-logind[1972]: New session 1 of user core. Dec 16 12:26:19.184602 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:26:19.194553 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:26:19.223860 (systemd)[2244]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:26:19.229352 systemd-logind[1972]: New session c1 of user core. Dec 16 12:26:19.568422 systemd[2244]: Queued start job for default target default.target. Dec 16 12:26:19.590052 systemd[2244]: Created slice app.slice - User Application Slice. Dec 16 12:26:19.590626 systemd[2244]: Reached target paths.target - Paths. Dec 16 12:26:19.590750 systemd[2244]: Reached target timers.target - Timers. 
Dec 16 12:26:19.593411 systemd[2244]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:26:19.625643 systemd[2244]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:26:19.625898 systemd[2244]: Reached target sockets.target - Sockets. Dec 16 12:26:19.626626 systemd[2244]: Reached target basic.target - Basic System. Dec 16 12:26:19.626760 systemd[2244]: Reached target default.target - Main User Target. Dec 16 12:26:19.626820 systemd[2244]: Startup finished in 382ms. Dec 16 12:26:19.627282 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:26:19.639394 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:26:19.795500 systemd[1]: Started sshd@1-172.31.28.27:22-139.178.89.65:41756.service - OpenSSH per-connection server daemon (139.178.89.65:41756). Dec 16 12:26:19.996659 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 41756 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:19.999161 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:20.008178 systemd-logind[1972]: New session 2 of user core. Dec 16 12:26:20.015352 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:26:20.088302 amazon-ssm-agent[2175]: 2025-12-16 12:26:20.0879 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 12:26:20.155663 sshd[2258]: Connection closed by 139.178.89.65 port 41756 Dec 16 12:26:20.159344 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:20.169813 systemd[1]: sshd@1-172.31.28.27:22-139.178.89.65:41756.service: Deactivated successfully. Dec 16 12:26:20.178049 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:26:20.183693 systemd-logind[1972]: Session 2 logged out. Waiting for processes to exit. 
Dec 16 12:26:20.188825 amazon-ssm-agent[2175]: 2025-12-16 12:26:20.0922 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2261) started Dec 16 12:26:20.205154 systemd[1]: Started sshd@2-172.31.28.27:22-139.178.89.65:41766.service - OpenSSH per-connection server daemon (139.178.89.65:41766). Dec 16 12:26:20.210699 systemd-logind[1972]: Removed session 2. Dec 16 12:26:20.290344 amazon-ssm-agent[2175]: 2025-12-16 12:26:20.0923 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 12:26:20.432152 sshd[2270]: Accepted publickey for core from 139.178.89.65 port 41766 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:20.435235 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:20.448138 systemd-logind[1972]: New session 3 of user core. Dec 16 12:26:20.452530 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:26:20.583957 sshd[2280]: Connection closed by 139.178.89.65 port 41766 Dec 16 12:26:20.584929 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:20.594831 systemd[1]: sshd@2-172.31.28.27:22-139.178.89.65:41766.service: Deactivated successfully. Dec 16 12:26:20.594910 systemd-logind[1972]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:26:20.600714 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:26:20.604701 systemd-logind[1972]: Removed session 3. Dec 16 12:26:22.412787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:26:22.416785 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:26:22.427329 systemd[1]: Startup finished in 3.732s (kernel) + 8.779s (initrd) + 12.032s (userspace) = 24.544s. 
Dec 16 12:26:22.434601 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:26:24.415640 kubelet[2290]: E1216 12:26:24.415556 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:26:24.420317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:26:24.420630 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:26:24.421516 systemd[1]: kubelet.service: Consumed 1.511s CPU time, 258.4M memory peak. Dec 16 12:26:24.671807 systemd-resolved[1870]: Clock change detected. Flushing caches. Dec 16 12:26:30.295925 systemd[1]: Started sshd@3-172.31.28.27:22-139.178.89.65:60940.service - OpenSSH per-connection server daemon (139.178.89.65:60940). Dec 16 12:26:30.488493 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 60940 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:30.491514 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:30.502289 systemd-logind[1972]: New session 4 of user core. Dec 16 12:26:30.508830 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:26:30.634615 sshd[2304]: Connection closed by 139.178.89.65 port 60940 Dec 16 12:26:30.635753 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:30.643362 systemd[1]: sshd@3-172.31.28.27:22-139.178.89.65:60940.service: Deactivated successfully. Dec 16 12:26:30.647794 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:26:30.652710 systemd-logind[1972]: Session 4 logged out. Waiting for processes to exit. 
Dec 16 12:26:30.655129 systemd-logind[1972]: Removed session 4. Dec 16 12:26:30.670855 systemd[1]: Started sshd@4-172.31.28.27:22-139.178.89.65:60950.service - OpenSSH per-connection server daemon (139.178.89.65:60950). Dec 16 12:26:30.878035 sshd[2310]: Accepted publickey for core from 139.178.89.65 port 60950 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:30.880425 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:30.889110 systemd-logind[1972]: New session 5 of user core. Dec 16 12:26:30.902805 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:26:31.021027 sshd[2313]: Connection closed by 139.178.89.65 port 60950 Dec 16 12:26:31.021781 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:31.028810 systemd-logind[1972]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:26:31.030113 systemd[1]: sshd@4-172.31.28.27:22-139.178.89.65:60950.service: Deactivated successfully. Dec 16 12:26:31.034264 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:26:31.039704 systemd-logind[1972]: Removed session 5. Dec 16 12:26:31.059023 systemd[1]: Started sshd@5-172.31.28.27:22-139.178.89.65:60964.service - OpenSSH per-connection server daemon (139.178.89.65:60964). Dec 16 12:26:31.266077 sshd[2319]: Accepted publickey for core from 139.178.89.65 port 60964 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:31.268946 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:31.277053 systemd-logind[1972]: New session 6 of user core. Dec 16 12:26:31.290740 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 12:26:31.416291 sshd[2322]: Connection closed by 139.178.89.65 port 60964 Dec 16 12:26:31.417145 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:31.423811 systemd-logind[1972]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:26:31.424000 systemd[1]: sshd@5-172.31.28.27:22-139.178.89.65:60964.service: Deactivated successfully. Dec 16 12:26:31.427972 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:26:31.434545 systemd-logind[1972]: Removed session 6. Dec 16 12:26:31.452933 systemd[1]: Started sshd@6-172.31.28.27:22-139.178.89.65:60978.service - OpenSSH per-connection server daemon (139.178.89.65:60978). Dec 16 12:26:31.648407 sshd[2328]: Accepted publickey for core from 139.178.89.65 port 60978 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:31.650598 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:31.658251 systemd-logind[1972]: New session 7 of user core. Dec 16 12:26:31.669695 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:26:31.787256 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:26:31.788378 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:31.807631 sudo[2332]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:31.831891 sshd[2331]: Connection closed by 139.178.89.65 port 60978 Dec 16 12:26:31.832931 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:31.839922 systemd-logind[1972]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:26:31.840844 systemd[1]: sshd@6-172.31.28.27:22-139.178.89.65:60978.service: Deactivated successfully. Dec 16 12:26:31.844350 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:26:31.849093 systemd-logind[1972]: Removed session 7. 
Dec 16 12:26:31.868873 systemd[1]: Started sshd@7-172.31.28.27:22-139.178.89.65:60992.service - OpenSSH per-connection server daemon (139.178.89.65:60992). Dec 16 12:26:32.072252 sshd[2338]: Accepted publickey for core from 139.178.89.65 port 60992 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:32.074560 sshd-session[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:32.083563 systemd-logind[1972]: New session 8 of user core. Dec 16 12:26:32.090778 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:26:32.194879 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:26:32.196216 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:32.206814 sudo[2343]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:32.217397 sudo[2342]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:26:32.218325 sudo[2342]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:32.236963 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:26:32.306198 augenrules[2365]: No rules Dec 16 12:26:32.308706 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:26:32.309176 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:26:32.311851 sudo[2342]: pam_unix(sudo:session): session closed for user root Dec 16 12:26:32.334842 sshd[2341]: Connection closed by 139.178.89.65 port 60992 Dec 16 12:26:32.335727 sshd-session[2338]: pam_unix(sshd:session): session closed for user core Dec 16 12:26:32.344996 systemd-logind[1972]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:26:32.345885 systemd[1]: sshd@7-172.31.28.27:22-139.178.89.65:60992.service: Deactivated successfully. 
Dec 16 12:26:32.350558 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:26:32.354339 systemd-logind[1972]: Removed session 8. Dec 16 12:26:32.375436 systemd[1]: Started sshd@8-172.31.28.27:22-139.178.89.65:60998.service - OpenSSH per-connection server daemon (139.178.89.65:60998). Dec 16 12:26:32.570699 sshd[2374]: Accepted publickey for core from 139.178.89.65 port 60998 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:26:32.572985 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:26:32.581541 systemd-logind[1972]: New session 9 of user core. Dec 16 12:26:32.590730 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:26:32.694707 sudo[2378]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:26:32.695279 sudo[2378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:26:33.215217 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:26:33.238058 (dockerd)[2396]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:26:33.617100 dockerd[2396]: time="2025-12-16T12:26:33.616924100Z" level=info msg="Starting up" Dec 16 12:26:33.618809 dockerd[2396]: time="2025-12-16T12:26:33.618744356Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:26:33.638693 dockerd[2396]: time="2025-12-16T12:26:33.638625452Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:26:33.680414 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1587304703-merged.mount: Deactivated successfully. Dec 16 12:26:33.699010 systemd[1]: var-lib-docker-metacopy\x2dcheck3133801684-merged.mount: Deactivated successfully. 
Dec 16 12:26:33.718889 dockerd[2396]: time="2025-12-16T12:26:33.718578032Z" level=info msg="Loading containers: start." Dec 16 12:26:33.734524 kernel: Initializing XFRM netlink socket Dec 16 12:26:34.111779 (udev-worker)[2417]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:26:34.126826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 12:26:34.131368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:26:34.213784 systemd-networkd[1865]: docker0: Link UP Dec 16 12:26:34.233132 dockerd[2396]: time="2025-12-16T12:26:34.232984387Z" level=info msg="Loading containers: done." Dec 16 12:26:34.298625 dockerd[2396]: time="2025-12-16T12:26:34.298510555Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:26:34.299487 dockerd[2396]: time="2025-12-16T12:26:34.298988851Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:26:34.299487 dockerd[2396]: time="2025-12-16T12:26:34.299188699Z" level=info msg="Initializing buildkit" Dec 16 12:26:34.370790 dockerd[2396]: time="2025-12-16T12:26:34.370597988Z" level=info msg="Completed buildkit initialization" Dec 16 12:26:34.393669 dockerd[2396]: time="2025-12-16T12:26:34.393553172Z" level=info msg="Daemon has completed initialization" Dec 16 12:26:34.394131 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:26:34.396427 dockerd[2396]: time="2025-12-16T12:26:34.394187180Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:26:34.621778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:26:34.636109 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:26:34.673223 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck272849553-merged.mount: Deactivated successfully.
Dec 16 12:26:34.722387 kubelet[2610]: E1216 12:26:34.722261 2610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:26:34.732528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:26:34.732865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:26:34.733741 systemd[1]: kubelet.service: Consumed 376ms CPU time, 106.9M memory peak.
Dec 16 12:26:35.935495 containerd[2000]: time="2025-12-16T12:26:35.934712999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Dec 16 12:26:36.553218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154314896.mount: Deactivated successfully.
Dec 16 12:26:38.084089 containerd[2000]: time="2025-12-16T12:26:38.083999734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:38.086046 containerd[2000]: time="2025-12-16T12:26:38.085943758Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281"
Dec 16 12:26:38.088618 containerd[2000]: time="2025-12-16T12:26:38.088521634Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:38.095583 containerd[2000]: time="2025-12-16T12:26:38.095516890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:38.097954 containerd[2000]: time="2025-12-16T12:26:38.097722646Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.162949527s"
Dec 16 12:26:38.097954 containerd[2000]: time="2025-12-16T12:26:38.097786378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\""
Dec 16 12:26:38.100689 containerd[2000]: time="2025-12-16T12:26:38.100611334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Dec 16 12:26:39.567540 containerd[2000]: time="2025-12-16T12:26:39.567300805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:39.569098 containerd[2000]: time="2025-12-16T12:26:39.568883713Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081"
Dec 16 12:26:39.570194 containerd[2000]: time="2025-12-16T12:26:39.570134546Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:39.576532 containerd[2000]: time="2025-12-16T12:26:39.576374330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:39.579507 containerd[2000]: time="2025-12-16T12:26:39.579178262Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.478497292s"
Dec 16 12:26:39.579507 containerd[2000]: time="2025-12-16T12:26:39.579255266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\""
Dec 16 12:26:39.579963 containerd[2000]: time="2025-12-16T12:26:39.579931718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Dec 16 12:26:40.788871 containerd[2000]: time="2025-12-16T12:26:40.788805952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:40.790547 containerd[2000]: time="2025-12-16T12:26:40.790495612Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067"
Dec 16 12:26:40.792766 containerd[2000]: time="2025-12-16T12:26:40.791922820Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:40.796806 containerd[2000]: time="2025-12-16T12:26:40.796756180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:40.798615 containerd[2000]: time="2025-12-16T12:26:40.798559696Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.218575058s"
Dec 16 12:26:40.798699 containerd[2000]: time="2025-12-16T12:26:40.798613012Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\""
Dec 16 12:26:40.799264 containerd[2000]: time="2025-12-16T12:26:40.799199368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Dec 16 12:26:42.053001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502765394.mount: Deactivated successfully.
Dec 16 12:26:42.657079 containerd[2000]: time="2025-12-16T12:26:42.656983649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:42.659215 containerd[2000]: time="2025-12-16T12:26:42.659137109Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673"
Dec 16 12:26:42.661381 containerd[2000]: time="2025-12-16T12:26:42.661298573Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:42.666884 containerd[2000]: time="2025-12-16T12:26:42.666811313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:42.668828 containerd[2000]: time="2025-12-16T12:26:42.668745713Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.868235429s"
Dec 16 12:26:42.668828 containerd[2000]: time="2025-12-16T12:26:42.668817701Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\""
Dec 16 12:26:42.670098 containerd[2000]: time="2025-12-16T12:26:42.670058537Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Dec 16 12:26:43.216702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673250150.mount: Deactivated successfully.
Dec 16 12:26:44.485535 containerd[2000]: time="2025-12-16T12:26:44.485118918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:44.487373 containerd[2000]: time="2025-12-16T12:26:44.487279098Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Dec 16 12:26:44.490428 containerd[2000]: time="2025-12-16T12:26:44.490327830Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:44.497776 containerd[2000]: time="2025-12-16T12:26:44.497672490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:44.499293 containerd[2000]: time="2025-12-16T12:26:44.499024866Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.828720533s"
Dec 16 12:26:44.499293 containerd[2000]: time="2025-12-16T12:26:44.499098774Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Dec 16 12:26:44.500506 containerd[2000]: time="2025-12-16T12:26:44.499765746Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 16 12:26:44.885437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:26:44.888195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:45.027893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847687336.mount: Deactivated successfully.
Dec 16 12:26:45.048649 containerd[2000]: time="2025-12-16T12:26:45.048541061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:26:45.050884 containerd[2000]: time="2025-12-16T12:26:45.050811797Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Dec 16 12:26:45.054488 containerd[2000]: time="2025-12-16T12:26:45.053874665Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:26:45.060830 containerd[2000]: time="2025-12-16T12:26:45.060774173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 16 12:26:45.064172 containerd[2000]: time="2025-12-16T12:26:45.064101725Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 564.266391ms"
Dec 16 12:26:45.064396 containerd[2000]: time="2025-12-16T12:26:45.064359269Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Dec 16 12:26:45.065178 containerd[2000]: time="2025-12-16T12:26:45.065134193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Dec 16 12:26:45.242822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:45.258212 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:26:45.328975 kubelet[2761]: E1216 12:26:45.328860 2761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:26:45.334820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:26:45.335131 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:26:45.335931 systemd[1]: kubelet.service: Consumed 313ms CPU time, 107.1M memory peak.
Dec 16 12:26:45.636965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1667434556.mount: Deactivated successfully.
Dec 16 12:26:47.785225 containerd[2000]: time="2025-12-16T12:26:47.783950290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:47.786246 containerd[2000]: time="2025-12-16T12:26:47.786187498Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Dec 16 12:26:47.788786 containerd[2000]: time="2025-12-16T12:26:47.788727358Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:47.794530 containerd[2000]: time="2025-12-16T12:26:47.794478814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:26:47.796664 containerd[2000]: time="2025-12-16T12:26:47.796604122Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.731243537s"
Dec 16 12:26:47.796763 containerd[2000]: time="2025-12-16T12:26:47.796661098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Dec 16 12:26:48.079715 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 12:26:55.385491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 12:26:55.391814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:55.431824 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:26:55.432047 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:26:55.433611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:55.439211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:55.486330 systemd[1]: Reload requested from client PID 2857 ('systemctl') (unit session-9.scope)...
Dec 16 12:26:55.486370 systemd[1]: Reloading...
Dec 16 12:26:55.740539 zram_generator::config[2907]: No configuration found.
Dec 16 12:26:56.192176 systemd[1]: Reloading finished in 705 ms.
Dec 16 12:26:56.289637 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:26:56.289829 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:26:56.291542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:56.291637 systemd[1]: kubelet.service: Consumed 236ms CPU time, 95M memory peak.
Dec 16 12:26:56.295181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:26:56.639422 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:26:56.659053 (kubelet)[2965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:26:56.733726 kubelet[2965]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:26:56.733726 kubelet[2965]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:26:56.733726 kubelet[2965]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:26:56.734326 kubelet[2965]: I1216 12:26:56.733822 2965 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:26:58.325549 kubelet[2965]: I1216 12:26:58.325165 2965 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:26:58.325549 kubelet[2965]: I1216 12:26:58.325214 2965 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:26:58.326389 kubelet[2965]: I1216 12:26:58.326360 2965 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:26:58.382435 kubelet[2965]: E1216 12:26:58.382370 2965 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:26:58.387567 kubelet[2965]: I1216 12:26:58.387292 2965 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:26:58.401001 kubelet[2965]: I1216 12:26:58.400955 2965 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:26:58.407044 kubelet[2965]: I1216 12:26:58.406988 2965 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:26:58.407705 kubelet[2965]: I1216 12:26:58.407652 2965 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:26:58.407967 kubelet[2965]: I1216 12:26:58.407706 2965 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:26:58.408166 kubelet[2965]: I1216 12:26:58.408105 2965 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:26:58.408166 kubelet[2965]: I1216 12:26:58.408127 2965 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:26:58.408542 kubelet[2965]: I1216 12:26:58.408512 2965 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:26:58.414425 kubelet[2965]: I1216 12:26:58.414370 2965 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:26:58.414425 kubelet[2965]: I1216 12:26:58.414420 2965 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:26:58.416495 kubelet[2965]: I1216 12:26:58.416272 2965 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:26:58.418546 kubelet[2965]: I1216 12:26:58.418515 2965 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:26:58.422914 kubelet[2965]: E1216 12:26:58.422847 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:26:58.423088 kubelet[2965]: E1216 12:26:58.423025 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-27&limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:26:58.423746 kubelet[2965]: I1216 12:26:58.423693 2965 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:26:58.425004 kubelet[2965]: I1216 12:26:58.424907 2965 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:26:58.425177 kubelet[2965]: W1216 12:26:58.425141 2965 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:26:58.433003 kubelet[2965]: I1216 12:26:58.432926 2965 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:26:58.433221 kubelet[2965]: I1216 12:26:58.433165 2965 server.go:1289] "Started kubelet"
Dec 16 12:26:58.439482 kubelet[2965]: I1216 12:26:58.439386 2965 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:26:58.444441 kubelet[2965]: I1216 12:26:58.444312 2965 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:26:58.445186 kubelet[2965]: I1216 12:26:58.444984 2965 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:26:58.449193 kubelet[2965]: I1216 12:26:58.449135 2965 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 12:26:58.454655 kubelet[2965]: E1216 12:26:58.452169 2965 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.27:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-27.1881b1cbf3d0947f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-27,UID:ip-172-31-28-27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-27,},FirstTimestamp:2025-12-16 12:26:58.432955519 +0000 UTC m=+1.766163502,LastTimestamp:2025-12-16 12:26:58.432955519 +0000 UTC m=+1.766163502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-27,}"
Dec 16 12:26:58.460602 kubelet[2965]: I1216 12:26:58.460400 2965 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:26:58.461404 kubelet[2965]: I1216 12:26:58.461098 2965 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:26:58.464183 kubelet[2965]: E1216 12:26:58.464122 2965 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-27\" not found"
Dec 16 12:26:58.465033 kubelet[2965]: I1216 12:26:58.464981 2965 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:26:58.465197 kubelet[2965]: I1216 12:26:58.465162 2965 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:26:58.465282 kubelet[2965]: I1216 12:26:58.465254 2965 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:26:58.468947 kubelet[2965]: I1216 12:26:58.468601 2965 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:26:58.468947 kubelet[2965]: I1216 12:26:58.468804 2965 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:26:58.469729 kubelet[2965]: E1216 12:26:58.469691 2965 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:26:58.470929 kubelet[2965]: E1216 12:26:58.470863 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:26:58.471406 kubelet[2965]: E1216 12:26:58.471358 2965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-27?timeout=10s\": dial tcp 172.31.28.27:6443: connect: connection refused" interval="200ms"
Dec 16 12:26:58.474511 kubelet[2965]: I1216 12:26:58.474355 2965 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:26:58.477474 kubelet[2965]: I1216 12:26:58.477384 2965 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:26:58.515287 kubelet[2965]: I1216 12:26:58.514783 2965 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:26:58.515287 kubelet[2965]: I1216 12:26:58.514826 2965 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 16 12:26:58.515287 kubelet[2965]: I1216 12:26:58.514859 2965 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:26:58.515287 kubelet[2965]: I1216 12:26:58.514873 2965 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 12:26:58.515287 kubelet[2965]: E1216 12:26:58.514939 2965 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:26:58.519011 kubelet[2965]: E1216 12:26:58.518968 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:26:58.525754 kubelet[2965]: I1216 12:26:58.525719 2965 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:26:58.526033 kubelet[2965]: I1216 12:26:58.525950 2965 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:26:58.526033 kubelet[2965]: I1216 12:26:58.525983 2965 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:26:58.531527 kubelet[2965]: I1216 12:26:58.531445 2965 policy_none.go:49] "None policy: Start"
Dec 16 12:26:58.531527 kubelet[2965]: I1216 12:26:58.531519 2965 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 12:26:58.531709 kubelet[2965]: I1216 12:26:58.531543 2965 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 12:26:58.547216 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 12:26:58.565133 kubelet[2965]: E1216 12:26:58.565077 2965 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-27\" not found"
Dec 16 12:26:58.571075 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 12:26:58.578853 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 12:26:58.601611 kubelet[2965]: E1216 12:26:58.601179 2965 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:26:58.601978 kubelet[2965]: I1216 12:26:58.601939 2965 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:26:58.602095 kubelet[2965]: I1216 12:26:58.601974 2965 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:26:58.604008 kubelet[2965]: I1216 12:26:58.603586 2965 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:26:58.606182 kubelet[2965]: E1216 12:26:58.606121 2965 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:26:58.606312 kubelet[2965]: E1216 12:26:58.606210 2965 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-27\" not found"
Dec 16 12:26:58.639539 systemd[1]: Created slice kubepods-burstable-pod3f32de57560e22c55362b1f9646e2680.slice - libcontainer container kubepods-burstable-pod3f32de57560e22c55362b1f9646e2680.slice.
Dec 16 12:26:58.656339 kubelet[2965]: E1216 12:26:58.656238 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:26:58.662811 systemd[1]: Created slice kubepods-burstable-pod0af4eb0e06671fd6b742e325c9b3ce5c.slice - libcontainer container kubepods-burstable-pod0af4eb0e06671fd6b742e325c9b3ce5c.slice.
Dec 16 12:26:58.666477 kubelet[2965]: I1216 12:26:58.666168 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:26:58.666477 kubelet[2965]: I1216 12:26:58.666224 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:26:58.666477 kubelet[2965]: I1216 12:26:58.666262 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:26:58.666477 kubelet[2965]: I1216 12:26:58.666301 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07dd3abc1a5ef3b82113571fd7a33d0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-27\" (UID: \"07dd3abc1a5ef3b82113571fd7a33d0f\") " pod="kube-system/kube-scheduler-ip-172-31-28-27"
Dec 16 12:26:58.666477 kubelet[2965]: I1216 12:26:58.666335 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f32de57560e22c55362b1f9646e2680-ca-certs\") pod \"kube-apiserver-ip-172-31-28-27\" (UID: \"3f32de57560e22c55362b1f9646e2680\") " pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:26:58.666831 kubelet[2965]: I1216 12:26:58.666367 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f32de57560e22c55362b1f9646e2680-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-27\" (UID: \"3f32de57560e22c55362b1f9646e2680\") " pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:26:58.666831 kubelet[2965]: I1216 12:26:58.666398 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:26:58.667483 kubelet[2965]: I1216 12:26:58.666444 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:26:58.667659 kubelet[2965]: I1216 12:26:58.667634 2965 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f32de57560e22c55362b1f9646e2680-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-27\" (UID: \"3f32de57560e22c55362b1f9646e2680\") " pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:26:58.673178 kubelet[2965]: E1216 12:26:58.673086 2965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-27?timeout=10s\": dial tcp 172.31.28.27:6443: connect: connection refused" interval="400ms"
Dec 16 12:26:58.675366 kubelet[2965]: E1216 12:26:58.675236 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:26:58.677767 systemd[1]: Created slice kubepods-burstable-pod07dd3abc1a5ef3b82113571fd7a33d0f.slice - libcontainer container kubepods-burstable-pod07dd3abc1a5ef3b82113571fd7a33d0f.slice.
Dec 16 12:26:58.681929 kubelet[2965]: E1216 12:26:58.681552 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:26:58.705231 kubelet[2965]: I1216 12:26:58.705199 2965 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-27"
Dec 16 12:26:58.706131 kubelet[2965]: E1216 12:26:58.706091 2965 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.27:6443/api/v1/nodes\": dial tcp 172.31.28.27:6443: connect: connection refused" node="ip-172-31-28-27"
Dec 16 12:26:58.909025 kubelet[2965]: I1216 12:26:58.908642 2965 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-27"
Dec 16 12:26:58.909334 kubelet[2965]: E1216 12:26:58.909288 2965 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.27:6443/api/v1/nodes\": dial tcp 172.31.28.27:6443: connect: connection refused" node="ip-172-31-28-27"
Dec 16 12:26:58.960162 containerd[2000]: time="2025-12-16T12:26:58.960100570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-27,Uid:3f32de57560e22c55362b1f9646e2680,Namespace:kube-system,Attempt:0,}"
Dec 16 12:26:58.977475 containerd[2000]: time="2025-12-16T12:26:58.977260522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-27,Uid:0af4eb0e06671fd6b742e325c9b3ce5c,Namespace:kube-system,Attempt:0,}"
Dec 16 12:26:58.983976 containerd[2000]: time="2025-12-16T12:26:58.983896090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-27,Uid:07dd3abc1a5ef3b82113571fd7a33d0f,Namespace:kube-system,Attempt:0,}"
Dec 16 12:26:59.011315 containerd[2000]: time="2025-12-16T12:26:59.011161650Z" level=info msg="connecting to shim 102d3180140806749fcb03e97dbe54815e81b715f6d8446a6fecc2b1760d9528" address="unix:///run/containerd/s/29eeb40e537a8198e16446961be28672bb18d8dc91886b4d5826365648164348" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:26:59.074257 kubelet[2965]: E1216 12:26:59.074163 2965 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-27?timeout=10s\": dial tcp 172.31.28.27:6443: connect: connection refused" interval="800ms"
Dec 16 12:26:59.082765 containerd[2000]: time="2025-12-16T12:26:59.082638762Z" level=info msg="connecting to shim cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129" address="unix:///run/containerd/s/91879df58b40127b7dd775a68a88da22566b1ce12605d5b810df6ef7b45c5af6" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:26:59.085806 systemd[1]: Started cri-containerd-102d3180140806749fcb03e97dbe54815e81b715f6d8446a6fecc2b1760d9528.scope - libcontainer container 102d3180140806749fcb03e97dbe54815e81b715f6d8446a6fecc2b1760d9528.
Dec 16 12:26:59.112695 containerd[2000]: time="2025-12-16T12:26:59.112591867Z" level=info msg="connecting to shim dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f" address="unix:///run/containerd/s/cef3dea94a30dd72bc78f6139cb67c90474ac90e013773463ace2741f6c92492" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:26:59.177963 systemd[1]: Started cri-containerd-cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129.scope - libcontainer container cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129.
Dec 16 12:26:59.208764 systemd[1]: Started cri-containerd-dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f.scope - libcontainer container dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f.
Dec 16 12:26:59.242386 containerd[2000]: time="2025-12-16T12:26:59.242302639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-27,Uid:3f32de57560e22c55362b1f9646e2680,Namespace:kube-system,Attempt:0,} returns sandbox id \"102d3180140806749fcb03e97dbe54815e81b715f6d8446a6fecc2b1760d9528\""
Dec 16 12:26:59.256510 containerd[2000]: time="2025-12-16T12:26:59.255815479Z" level=info msg="CreateContainer within sandbox \"102d3180140806749fcb03e97dbe54815e81b715f6d8446a6fecc2b1760d9528\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 16 12:26:59.281673 containerd[2000]: time="2025-12-16T12:26:59.279412231Z" level=info msg="Container 1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:26:59.301110 containerd[2000]: time="2025-12-16T12:26:59.301011224Z" level=info msg="CreateContainer within sandbox \"102d3180140806749fcb03e97dbe54815e81b715f6d8446a6fecc2b1760d9528\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773\""
Dec 16 12:26:59.305131 containerd[2000]: time="2025-12-16T12:26:59.305051744Z" level=info msg="StartContainer for \"1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773\""
Dec 16 12:26:59.315416 kubelet[2965]: I1216 12:26:59.315357 2965 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-27"
Dec 16 12:26:59.316643 kubelet[2965]: E1216 12:26:59.316589 2965 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.27:6443/api/v1/nodes\": dial tcp 172.31.28.27:6443: connect: connection refused" node="ip-172-31-28-27"
Dec 16 12:26:59.319000 containerd[2000]: time="2025-12-16T12:26:59.318927608Z" level=info msg="connecting to shim 1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773" address="unix:///run/containerd/s/29eeb40e537a8198e16446961be28672bb18d8dc91886b4d5826365648164348" protocol=ttrpc version=3
Dec 16 12:26:59.346290 containerd[2000]: time="2025-12-16T12:26:59.346212704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-27,Uid:07dd3abc1a5ef3b82113571fd7a33d0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129\""
Dec 16 12:26:59.357219 containerd[2000]: time="2025-12-16T12:26:59.357142784Z" level=info msg="CreateContainer within sandbox \"cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 16 12:26:59.375820 containerd[2000]: time="2025-12-16T12:26:59.375745868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-27,Uid:0af4eb0e06671fd6b742e325c9b3ce5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f\""
Dec 16 12:26:59.382956 systemd[1]: Started cri-containerd-1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773.scope - libcontainer container 1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773.
Dec 16 12:26:59.388808 containerd[2000]: time="2025-12-16T12:26:59.388716044Z" level=info msg="Container 9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:26:59.393444 containerd[2000]: time="2025-12-16T12:26:59.393348344Z" level=info msg="CreateContainer within sandbox \"dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 16 12:26:59.418519 containerd[2000]: time="2025-12-16T12:26:59.418272668Z" level=info msg="CreateContainer within sandbox \"cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701\""
Dec 16 12:26:59.419622 containerd[2000]: time="2025-12-16T12:26:59.419576024Z" level=info msg="Container 1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:26:59.421610 containerd[2000]: time="2025-12-16T12:26:59.419901944Z" level=info msg="StartContainer for \"9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701\""
Dec 16 12:26:59.424482 containerd[2000]: time="2025-12-16T12:26:59.424410584Z" level=info msg="connecting to shim 9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701" address="unix:///run/containerd/s/91879df58b40127b7dd775a68a88da22566b1ce12605d5b810df6ef7b45c5af6" protocol=ttrpc version=3
Dec 16 12:26:59.444385 containerd[2000]: time="2025-12-16T12:26:59.444182312Z" level=info msg="CreateContainer within sandbox \"dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192\""
Dec 16 12:26:59.449065 containerd[2000]: time="2025-12-16T12:26:59.449005904Z" level=info msg="StartContainer for \"1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192\""
Dec 16 12:26:59.454411 containerd[2000]: time="2025-12-16T12:26:59.454340588Z" level=info msg="connecting to shim 1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192" address="unix:///run/containerd/s/cef3dea94a30dd72bc78f6139cb67c90474ac90e013773463ace2741f6c92492" protocol=ttrpc version=3
Dec 16 12:26:59.476857 systemd[1]: Started cri-containerd-9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701.scope - libcontainer container 9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701.
Dec 16 12:26:59.487262 kubelet[2965]: E1216 12:26:59.486848 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-27&limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:26:59.509780 kubelet[2965]: E1216 12:26:59.508274 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:26:59.540210 kubelet[2965]: E1216 12:26:59.540163 2965 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:26:59.549769 systemd[1]: Started cri-containerd-1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192.scope - libcontainer container 1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192.
Dec 16 12:26:59.573806 containerd[2000]: time="2025-12-16T12:26:59.573742005Z" level=info msg="StartContainer for \"1646845ea4dc200293ae1d3203da8df4e8d32b720824e04bbc1bba3a7b4d4773\" returns successfully"
Dec 16 12:26:59.668766 containerd[2000]: time="2025-12-16T12:26:59.668512773Z" level=info msg="StartContainer for \"9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701\" returns successfully"
Dec 16 12:26:59.737145 containerd[2000]: time="2025-12-16T12:26:59.734887102Z" level=info msg="StartContainer for \"1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192\" returns successfully"
Dec 16 12:27:00.121724 kubelet[2965]: I1216 12:27:00.120549 2965 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-27"
Dec 16 12:27:00.582469 kubelet[2965]: E1216 12:27:00.582331 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:00.589835 kubelet[2965]: E1216 12:27:00.589756 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:00.595481 kubelet[2965]: E1216 12:27:00.595346 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:01.558582 update_engine[1973]: I20251216 12:27:01.558492 1973 update_attempter.cc:509] Updating boot flags...
Dec 16 12:27:01.602545 kubelet[2965]: E1216 12:27:01.601748 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:01.602545 kubelet[2965]: E1216 12:27:01.602350 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:01.608533 kubelet[2965]: E1216 12:27:01.606434 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:02.618795 kubelet[2965]: E1216 12:27:02.616198 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:02.620166 kubelet[2965]: E1216 12:27:02.620116 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:02.626565 kubelet[2965]: E1216 12:27:02.623039 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:03.611597 kubelet[2965]: E1216 12:27:03.611370 2965 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:05.232395 kubelet[2965]: E1216 12:27:05.232307 2965 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-27\" not found" node="ip-172-31-28-27"
Dec 16 12:27:05.332686 kubelet[2965]: I1216 12:27:05.332570 2965 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-27"
Dec 16 12:27:05.365511 kubelet[2965]: I1216 12:27:05.365043 2965 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:05.403733 kubelet[2965]: E1216 12:27:05.403675 2965 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-27\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:05.403940 kubelet[2965]: I1216 12:27:05.403919 2965 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:05.416176 kubelet[2965]: E1216 12:27:05.415928 2965 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-27\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:05.416176 kubelet[2965]: I1216 12:27:05.415972 2965 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-27"
Dec 16 12:27:05.425820 kubelet[2965]: E1216 12:27:05.425757 2965 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-27\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-27"
Dec 16 12:27:05.430108 kubelet[2965]: I1216 12:27:05.428622 2965 apiserver.go:52] "Watching apiserver"
Dec 16 12:27:05.465325 kubelet[2965]: I1216 12:27:05.465270 2965 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 12:27:07.640506 systemd[1]: Reload requested from client PID 3433 ('systemctl') (unit session-9.scope)...
Dec 16 12:27:07.640990 systemd[1]: Reloading...
Dec 16 12:27:07.860567 zram_generator::config[3483]: No configuration found.
Dec 16 12:27:08.360314 systemd[1]: Reloading finished in 718 ms.
Dec 16 12:27:08.425553 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:08.442831 systemd[1]: kubelet.service: Deactivated successfully.
Dec 16 12:27:08.443532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:08.443609 systemd[1]: kubelet.service: Consumed 2.606s CPU time, 129.4M memory peak.
Dec 16 12:27:08.448493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:08.825504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:08.840536 (kubelet)[3537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:27:08.952967 kubelet[3537]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:27:08.955145 kubelet[3537]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:27:08.955145 kubelet[3537]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:27:08.955145 kubelet[3537]: I1216 12:27:08.953727 3537 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:27:08.972289 kubelet[3537]: I1216 12:27:08.971967 3537 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Dec 16 12:27:08.972565 kubelet[3537]: I1216 12:27:08.972521 3537 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:27:08.974150 kubelet[3537]: I1216 12:27:08.974072 3537 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:27:08.983679 kubelet[3537]: I1216 12:27:08.983587 3537 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 16 12:27:08.990628 kubelet[3537]: I1216 12:27:08.990568 3537 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:27:09.005493 kubelet[3537]: I1216 12:27:09.003897 3537 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:27:09.010594 kubelet[3537]: I1216 12:27:09.010559 3537 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 16 12:27:09.011337 kubelet[3537]: I1216 12:27:09.011293 3537 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:27:09.011714 kubelet[3537]: I1216 12:27:09.011441 3537 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:27:09.011938 kubelet[3537]: I1216 12:27:09.011916 3537 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:27:09.012044 kubelet[3537]: I1216 12:27:09.012027 3537 container_manager_linux.go:303] "Creating device plugin manager"
Dec 16 12:27:09.012193 kubelet[3537]: I1216 12:27:09.012176 3537 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:27:09.012587 kubelet[3537]: I1216 12:27:09.012566 3537 kubelet.go:480] "Attempting to sync node with API server"
Dec 16 12:27:09.013693 kubelet[3537]: I1216 12:27:09.013569 3537 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:27:09.014026 kubelet[3537]: I1216 12:27:09.013951 3537 kubelet.go:386] "Adding apiserver pod source"
Dec 16 12:27:09.014168 kubelet[3537]: I1216 12:27:09.014150 3537 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:27:09.019696 kubelet[3537]: I1216 12:27:09.019660 3537 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:27:09.021137 kubelet[3537]: I1216 12:27:09.020826 3537 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:27:09.032842 kubelet[3537]: I1216 12:27:09.032803 3537 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Dec 16 12:27:09.033091 kubelet[3537]: I1216 12:27:09.033072 3537 server.go:1289] "Started kubelet"
Dec 16 12:27:09.058537 kubelet[3537]: I1216 12:27:09.058503 3537 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:27:09.062483 kubelet[3537]: I1216 12:27:09.061364 3537 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:27:09.084830 kubelet[3537]: I1216 12:27:09.084682 3537 server.go:317] "Adding debug handlers to kubelet server"
Dec 16 12:27:09.088503 kubelet[3537]: I1216 12:27:09.066848 3537 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:27:09.091547 kubelet[3537]: I1216 12:27:09.065252 3537 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:27:09.091547 kubelet[3537]: I1216 12:27:09.090688 3537 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:27:09.091547 kubelet[3537]: E1216 12:27:09.073233 3537 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-27\" not found"
Dec 16 12:27:09.091547 kubelet[3537]: I1216 12:27:09.072992 3537 volume_manager.go:297] "Starting Kubelet Volume Manager"
Dec 16 12:27:09.091853 kubelet[3537]: I1216 12:27:09.073013 3537 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Dec 16 12:27:09.092371 kubelet[3537]: I1216 12:27:09.092008 3537 reconciler.go:26] "Reconciler: start to sync state"
Dec 16 12:27:09.099000 kubelet[3537]: I1216 12:27:09.098961 3537 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:27:09.112234 kubelet[3537]: I1216 12:27:09.112153 3537 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:27:09.133755 kubelet[3537]: E1216 12:27:09.133668 3537 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:27:09.141103 kubelet[3537]: I1216 12:27:09.140712 3537 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:27:09.162538 kubelet[3537]: I1216 12:27:09.162418 3537 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:27:09.167705 kubelet[3537]: I1216 12:27:09.167664 3537 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:27:09.168415 kubelet[3537]: I1216 12:27:09.167893 3537 status_manager.go:230] "Starting to sync pod status with apiserver"
Dec 16 12:27:09.168415 kubelet[3537]: I1216 12:27:09.167932 3537 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:27:09.168415 kubelet[3537]: I1216 12:27:09.167946 3537 kubelet.go:2436] "Starting kubelet main sync loop"
Dec 16 12:27:09.168415 kubelet[3537]: E1216 12:27:09.168013 3537 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:27:09.267937 kubelet[3537]: I1216 12:27:09.267895 3537 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:27:09.268486 kubelet[3537]: I1216 12:27:09.268374 3537 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:27:09.268486 kubelet[3537]: I1216 12:27:09.268416 3537 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:27:09.268877 kubelet[3537]: I1216 12:27:09.268852 3537 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 16 12:27:09.269016 kubelet[3537]: I1216 12:27:09.268963 3537 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 16 12:27:09.269111 kubelet[3537]: I1216 12:27:09.269095 3537 policy_none.go:49] "None policy: Start"
Dec 16 12:27:09.269233 kubelet[3537]: I1216 12:27:09.269215 3537 memory_manager.go:186] "Starting memorymanager" policy="None"
Dec 16 12:27:09.269474 kubelet[3537]: I1216 12:27:09.269323 3537 state_mem.go:35] "Initializing new in-memory state store"
Dec 16 12:27:09.269654 kubelet[3537]: E1216 12:27:09.268232 3537 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 16 12:27:09.269654 kubelet[3537]: I1216 12:27:09.269628 3537 state_mem.go:75] "Updated machine memory state"
Dec 16 12:27:09.280212 kubelet[3537]: E1216 12:27:09.280125 3537 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:27:09.280487 kubelet[3537]: I1216 12:27:09.280415 3537 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:27:09.280570 kubelet[3537]: I1216 12:27:09.280448 3537 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:27:09.281561 kubelet[3537]: I1216 12:27:09.281086 3537 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:27:09.287009 kubelet[3537]: E1216 12:27:09.286842 3537 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:27:09.408598 kubelet[3537]: I1216 12:27:09.408136 3537 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-27"
Dec 16 12:27:09.430691 kubelet[3537]: I1216 12:27:09.430632 3537 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-27"
Dec 16 12:27:09.430840 kubelet[3537]: I1216 12:27:09.430758 3537 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-27"
Dec 16 12:27:09.473497 kubelet[3537]: I1216 12:27:09.472582 3537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-27"
Dec 16 12:27:09.473497 kubelet[3537]: I1216 12:27:09.473263 3537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:09.474011 kubelet[3537]: I1216 12:27:09.473979 3537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:09.496076 kubelet[3537]: I1216 12:27:09.495638 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:09.496076 kubelet[3537]: I1216 12:27:09.495703 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f32de57560e22c55362b1f9646e2680-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-27\" (UID: \"3f32de57560e22c55362b1f9646e2680\") " pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:09.496076 kubelet[3537]: I1216 12:27:09.495744 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:09.496076 kubelet[3537]: I1216 12:27:09.495844 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:09.496076 kubelet[3537]: I1216 12:27:09.495913 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:09.496442 kubelet[3537]: I1216 12:27:09.495984 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07dd3abc1a5ef3b82113571fd7a33d0f-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-27\" (UID: \"07dd3abc1a5ef3b82113571fd7a33d0f\") " pod="kube-system/kube-scheduler-ip-172-31-28-27"
Dec 16 12:27:09.496442 kubelet[3537]: I1216 12:27:09.496067 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f32de57560e22c55362b1f9646e2680-ca-certs\") pod \"kube-apiserver-ip-172-31-28-27\" (UID: \"3f32de57560e22c55362b1f9646e2680\") " pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:09.496442 kubelet[3537]: I1216 12:27:09.496113 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f32de57560e22c55362b1f9646e2680-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-27\" (UID: \"3f32de57560e22c55362b1f9646e2680\") " pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:09.496442 kubelet[3537]: I1216 12:27:09.496177 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0af4eb0e06671fd6b742e325c9b3ce5c-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-27\" (UID: \"0af4eb0e06671fd6b742e325c9b3ce5c\") " pod="kube-system/kube-controller-manager-ip-172-31-28-27"
Dec 16 12:27:10.033078 kubelet[3537]: I1216 12:27:10.032852 3537 apiserver.go:52] "Watching apiserver"
Dec 16 12:27:10.092865 kubelet[3537]: I1216 12:27:10.092710 3537 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Dec 16 12:27:10.209547 kubelet[3537]: I1216 12:27:10.209489 3537 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:10.230059 kubelet[3537]: E1216 12:27:10.230004 3537 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-27\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-27"
Dec 16 12:27:10.259314 kubelet[3537]: I1216 12:27:10.259182 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-27" podStartSLOduration=1.259158858 podStartE2EDuration="1.259158858s" podCreationTimestamp="2025-12-16 12:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:10.25621935 +0000 UTC m=+1.403409548" watchObservedRunningTime="2025-12-16 12:27:10.259158858 +0000 UTC m=+1.406349068"
Dec 16 12:27:10.293623 kubelet[3537]: I1216 12:27:10.293443 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-27" podStartSLOduration=1.293425026 podStartE2EDuration="1.293425026s" podCreationTimestamp="2025-12-16 12:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:10.27661119 +0000 UTC m=+1.423801400" watchObservedRunningTime="2025-12-16 12:27:10.293425026 +0000 UTC m=+1.440615236"
Dec 16 12:27:10.296151 kubelet[3537]: I1216 12:27:10.295649 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-27" podStartSLOduration=1.295606446 podStartE2EDuration="1.295606446s" podCreationTimestamp="2025-12-16 12:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:10.293368638 +0000 UTC m=+1.440558836" watchObservedRunningTime="2025-12-16 12:27:10.295606446 +0000 UTC m=+1.442796728"
Dec 16 12:27:14.093391 kubelet[3537]: I1216 12:27:14.093306 3537 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 16 12:27:14.095237
containerd[2000]: time="2025-12-16T12:27:14.094638549Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:27:14.095762 kubelet[3537]: I1216 12:27:14.094968 3537 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:27:15.198287 systemd[1]: Created slice kubepods-besteffort-pod3c5c4aff_11e3_412e_97ba_722e4acae818.slice - libcontainer container kubepods-besteffort-pod3c5c4aff_11e3_412e_97ba_722e4acae818.slice. Dec 16 12:27:15.234201 kubelet[3537]: I1216 12:27:15.233529 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c5c4aff-11e3-412e-97ba-722e4acae818-lib-modules\") pod \"kube-proxy-h4bl5\" (UID: \"3c5c4aff-11e3-412e-97ba-722e4acae818\") " pod="kube-system/kube-proxy-h4bl5" Dec 16 12:27:15.234201 kubelet[3537]: I1216 12:27:15.233657 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddz6t\" (UniqueName: \"kubernetes.io/projected/3c5c4aff-11e3-412e-97ba-722e4acae818-kube-api-access-ddz6t\") pod \"kube-proxy-h4bl5\" (UID: \"3c5c4aff-11e3-412e-97ba-722e4acae818\") " pod="kube-system/kube-proxy-h4bl5" Dec 16 12:27:15.234201 kubelet[3537]: I1216 12:27:15.233782 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c5c4aff-11e3-412e-97ba-722e4acae818-xtables-lock\") pod \"kube-proxy-h4bl5\" (UID: \"3c5c4aff-11e3-412e-97ba-722e4acae818\") " pod="kube-system/kube-proxy-h4bl5" Dec 16 12:27:15.234201 kubelet[3537]: I1216 12:27:15.233855 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c5c4aff-11e3-412e-97ba-722e4acae818-kube-proxy\") pod \"kube-proxy-h4bl5\" (UID: 
\"3c5c4aff-11e3-412e-97ba-722e4acae818\") " pod="kube-system/kube-proxy-h4bl5" Dec 16 12:27:15.381263 systemd[1]: Created slice kubepods-besteffort-podc20ba251_1a94_4344_a4e8_294dd4c4b4ea.slice - libcontainer container kubepods-besteffort-podc20ba251_1a94_4344_a4e8_294dd4c4b4ea.slice. Dec 16 12:27:15.438013 kubelet[3537]: I1216 12:27:15.437955 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c20ba251-1a94-4344-a4e8-294dd4c4b4ea-var-lib-calico\") pod \"tigera-operator-7dcd859c48-b4jvt\" (UID: \"c20ba251-1a94-4344-a4e8-294dd4c4b4ea\") " pod="tigera-operator/tigera-operator-7dcd859c48-b4jvt" Dec 16 12:27:15.438197 kubelet[3537]: I1216 12:27:15.438049 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dh7w\" (UniqueName: \"kubernetes.io/projected/c20ba251-1a94-4344-a4e8-294dd4c4b4ea-kube-api-access-5dh7w\") pod \"tigera-operator-7dcd859c48-b4jvt\" (UID: \"c20ba251-1a94-4344-a4e8-294dd4c4b4ea\") " pod="tigera-operator/tigera-operator-7dcd859c48-b4jvt" Dec 16 12:27:15.512317 containerd[2000]: time="2025-12-16T12:27:15.512174916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h4bl5,Uid:3c5c4aff-11e3-412e-97ba-722e4acae818,Namespace:kube-system,Attempt:0,}" Dec 16 12:27:15.578613 containerd[2000]: time="2025-12-16T12:27:15.577054188Z" level=info msg="connecting to shim f5bf5b44772d8e5d1f7fecbce15d06265a9b4bee497eba611cd6d8436440842b" address="unix:///run/containerd/s/3bcd215be9403d616a3e68a6760f295abfa3c8e96f85c419da030b2ac70a32c2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:15.647798 systemd[1]: Started cri-containerd-f5bf5b44772d8e5d1f7fecbce15d06265a9b4bee497eba611cd6d8436440842b.scope - libcontainer container f5bf5b44772d8e5d1f7fecbce15d06265a9b4bee497eba611cd6d8436440842b. 
Dec 16 12:27:15.708965 containerd[2000]: time="2025-12-16T12:27:15.708909805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-b4jvt,Uid:c20ba251-1a94-4344-a4e8-294dd4c4b4ea,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:27:15.711412 containerd[2000]: time="2025-12-16T12:27:15.711357673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h4bl5,Uid:3c5c4aff-11e3-412e-97ba-722e4acae818,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5bf5b44772d8e5d1f7fecbce15d06265a9b4bee497eba611cd6d8436440842b\"" Dec 16 12:27:15.723836 containerd[2000]: time="2025-12-16T12:27:15.723762445Z" level=info msg="CreateContainer within sandbox \"f5bf5b44772d8e5d1f7fecbce15d06265a9b4bee497eba611cd6d8436440842b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:27:15.758487 containerd[2000]: time="2025-12-16T12:27:15.758389045Z" level=info msg="Container 52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:15.764181 containerd[2000]: time="2025-12-16T12:27:15.762359989Z" level=info msg="connecting to shim ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d" address="unix:///run/containerd/s/2662cae030359e36bacc88121a9a81a0efdb92b14003a4ab54c9e450e4cb53cd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:15.778265 containerd[2000]: time="2025-12-16T12:27:15.778180345Z" level=info msg="CreateContainer within sandbox \"f5bf5b44772d8e5d1f7fecbce15d06265a9b4bee497eba611cd6d8436440842b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0\"" Dec 16 12:27:15.780299 containerd[2000]: time="2025-12-16T12:27:15.780239101Z" level=info msg="StartContainer for \"52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0\"" Dec 16 12:27:15.784331 containerd[2000]: time="2025-12-16T12:27:15.783405601Z" level=info msg="connecting to shim 
52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0" address="unix:///run/containerd/s/3bcd215be9403d616a3e68a6760f295abfa3c8e96f85c419da030b2ac70a32c2" protocol=ttrpc version=3 Dec 16 12:27:15.820982 systemd[1]: Started cri-containerd-ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d.scope - libcontainer container ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d. Dec 16 12:27:15.830976 systemd[1]: Started cri-containerd-52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0.scope - libcontainer container 52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0. Dec 16 12:27:15.940663 containerd[2000]: time="2025-12-16T12:27:15.940486562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-b4jvt,Uid:c20ba251-1a94-4344-a4e8-294dd4c4b4ea,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d\"" Dec 16 12:27:15.946231 containerd[2000]: time="2025-12-16T12:27:15.945751370Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:27:15.970179 containerd[2000]: time="2025-12-16T12:27:15.970123394Z" level=info msg="StartContainer for \"52ecb6422bbb69d941875ea6275e1f8ab758e47b5833df042c1d49f9651a32a0\" returns successfully" Dec 16 12:27:16.264302 kubelet[3537]: I1216 12:27:16.264076 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h4bl5" podStartSLOduration=1.264050232 podStartE2EDuration="1.264050232s" podCreationTimestamp="2025-12-16 12:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:16.263566008 +0000 UTC m=+7.410756218" watchObservedRunningTime="2025-12-16 12:27:16.264050232 +0000 UTC m=+7.411240430" Dec 16 12:27:17.348588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412473888.mount: Deactivated successfully. 
Dec 16 12:27:18.869168 containerd[2000]: time="2025-12-16T12:27:18.869042801Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:18.871576 containerd[2000]: time="2025-12-16T12:27:18.871440581Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:27:18.874501 containerd[2000]: time="2025-12-16T12:27:18.874413533Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:18.880365 containerd[2000]: time="2025-12-16T12:27:18.880302557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:18.883004 containerd[2000]: time="2025-12-16T12:27:18.882776909Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.936607879s" Dec 16 12:27:18.883004 containerd[2000]: time="2025-12-16T12:27:18.882842105Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:27:18.892334 containerd[2000]: time="2025-12-16T12:27:18.892254149Z" level=info msg="CreateContainer within sandbox \"ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:27:18.909421 containerd[2000]: time="2025-12-16T12:27:18.909354473Z" level=info msg="Container 
e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:18.916241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246170316.mount: Deactivated successfully. Dec 16 12:27:18.926853 containerd[2000]: time="2025-12-16T12:27:18.926712041Z" level=info msg="CreateContainer within sandbox \"ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\"" Dec 16 12:27:18.930547 containerd[2000]: time="2025-12-16T12:27:18.929673149Z" level=info msg="StartContainer for \"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\"" Dec 16 12:27:18.931444 containerd[2000]: time="2025-12-16T12:27:18.931387541Z" level=info msg="connecting to shim e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496" address="unix:///run/containerd/s/2662cae030359e36bacc88121a9a81a0efdb92b14003a4ab54c9e450e4cb53cd" protocol=ttrpc version=3 Dec 16 12:27:18.974768 systemd[1]: Started cri-containerd-e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496.scope - libcontainer container e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496. 
Dec 16 12:27:19.033538 containerd[2000]: time="2025-12-16T12:27:19.033357074Z" level=info msg="StartContainer for \"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\" returns successfully" Dec 16 12:27:19.274354 kubelet[3537]: I1216 12:27:19.274247 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-b4jvt" podStartSLOduration=1.33295896 podStartE2EDuration="4.274225179s" podCreationTimestamp="2025-12-16 12:27:15 +0000 UTC" firstStartedPulling="2025-12-16 12:27:15.944001878 +0000 UTC m=+7.091192064" lastFinishedPulling="2025-12-16 12:27:18.885268097 +0000 UTC m=+10.032458283" observedRunningTime="2025-12-16 12:27:19.274214523 +0000 UTC m=+10.421404733" watchObservedRunningTime="2025-12-16 12:27:19.274225179 +0000 UTC m=+10.421415389" Dec 16 12:27:28.079186 sudo[2378]: pam_unix(sudo:session): session closed for user root Dec 16 12:27:28.104395 sshd[2377]: Connection closed by 139.178.89.65 port 60998 Dec 16 12:27:28.105512 sshd-session[2374]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:28.115522 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:27:28.117628 systemd[1]: session-9.scope: Consumed 11.333s CPU time, 225.1M memory peak. Dec 16 12:27:28.122191 systemd[1]: sshd@8-172.31.28.27:22-139.178.89.65:60998.service: Deactivated successfully. Dec 16 12:27:28.131154 systemd-logind[1972]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:27:28.136020 systemd-logind[1972]: Removed session 9. Dec 16 12:27:44.163736 systemd[1]: Created slice kubepods-besteffort-pod41117ced_83c7_4efa_b0ee_70f092df907e.slice - libcontainer container kubepods-besteffort-pod41117ced_83c7_4efa_b0ee_70f092df907e.slice. 
Dec 16 12:27:44.240118 kubelet[3537]: I1216 12:27:44.240043 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41117ced-83c7-4efa-b0ee-70f092df907e-tigera-ca-bundle\") pod \"calico-typha-77b84d78dc-w4drz\" (UID: \"41117ced-83c7-4efa-b0ee-70f092df907e\") " pod="calico-system/calico-typha-77b84d78dc-w4drz" Dec 16 12:27:44.240698 kubelet[3537]: I1216 12:27:44.240123 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41117ced-83c7-4efa-b0ee-70f092df907e-typha-certs\") pod \"calico-typha-77b84d78dc-w4drz\" (UID: \"41117ced-83c7-4efa-b0ee-70f092df907e\") " pod="calico-system/calico-typha-77b84d78dc-w4drz" Dec 16 12:27:44.240698 kubelet[3537]: I1216 12:27:44.240168 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwhd\" (UniqueName: \"kubernetes.io/projected/41117ced-83c7-4efa-b0ee-70f092df907e-kube-api-access-5hwhd\") pod \"calico-typha-77b84d78dc-w4drz\" (UID: \"41117ced-83c7-4efa-b0ee-70f092df907e\") " pod="calico-system/calico-typha-77b84d78dc-w4drz" Dec 16 12:27:44.391776 systemd[1]: Created slice kubepods-besteffort-pod98d1ccbe_37bf_4d66_8c9c_aa0b5dbd7a19.slice - libcontainer container kubepods-besteffort-pod98d1ccbe_37bf_4d66_8c9c_aa0b5dbd7a19.slice. 
Dec 16 12:27:44.441997 kubelet[3537]: I1216 12:27:44.441314 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-flexvol-driver-host\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.442282 kubelet[3537]: I1216 12:27:44.442234 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48rp\" (UniqueName: \"kubernetes.io/projected/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-kube-api-access-w48rp\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.442934 kubelet[3537]: I1216 12:27:44.442896 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-policysync\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.443130 kubelet[3537]: I1216 12:27:44.443103 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-var-run-calico\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.443270 kubelet[3537]: I1216 12:27:44.443246 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-xtables-lock\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.443391 kubelet[3537]: I1216 
12:27:44.443368 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-cni-net-dir\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.443594 kubelet[3537]: I1216 12:27:44.443568 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-var-lib-calico\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.443767 kubelet[3537]: I1216 12:27:44.443741 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-lib-modules\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.443908 kubelet[3537]: I1216 12:27:44.443877 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-tigera-ca-bundle\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.444047 kubelet[3537]: I1216 12:27:44.444023 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-cni-bin-dir\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.444175 kubelet[3537]: I1216 12:27:44.444152 3537 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-node-certs\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.444345 kubelet[3537]: I1216 12:27:44.444317 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19-cni-log-dir\") pod \"calico-node-2mmbh\" (UID: \"98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19\") " pod="calico-system/calico-node-2mmbh" Dec 16 12:27:44.473119 containerd[2000]: time="2025-12-16T12:27:44.472996012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b84d78dc-w4drz,Uid:41117ced-83c7-4efa-b0ee-70f092df907e,Namespace:calico-system,Attempt:0,}" Dec 16 12:27:44.522859 kubelet[3537]: E1216 12:27:44.520884 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:27:44.537918 containerd[2000]: time="2025-12-16T12:27:44.537781972Z" level=info msg="connecting to shim 42382fa06e65050e65a92d28325762ccb3ed6cc9d5132eb6ac6d3123b80cac7d" address="unix:///run/containerd/s/6c0732fcda821744a5092acaeb76d5fa0e68c70b7ca951fa3b5f868e6f4ce9dc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:44.546490 kubelet[3537]: I1216 12:27:44.544931 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr8d2\" (UniqueName: \"kubernetes.io/projected/de3f24db-d343-45e7-a0cf-74925b070014-kube-api-access-wr8d2\") pod \"csi-node-driver-z7gkl\" (UID: \"de3f24db-d343-45e7-a0cf-74925b070014\") " 
pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:44.546490 kubelet[3537]: I1216 12:27:44.545052 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de3f24db-d343-45e7-a0cf-74925b070014-varrun\") pod \"csi-node-driver-z7gkl\" (UID: \"de3f24db-d343-45e7-a0cf-74925b070014\") " pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:44.546490 kubelet[3537]: I1216 12:27:44.545191 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de3f24db-d343-45e7-a0cf-74925b070014-registration-dir\") pod \"csi-node-driver-z7gkl\" (UID: \"de3f24db-d343-45e7-a0cf-74925b070014\") " pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:44.546490 kubelet[3537]: I1216 12:27:44.545282 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de3f24db-d343-45e7-a0cf-74925b070014-kubelet-dir\") pod \"csi-node-driver-z7gkl\" (UID: \"de3f24db-d343-45e7-a0cf-74925b070014\") " pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:44.546490 kubelet[3537]: I1216 12:27:44.545316 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de3f24db-d343-45e7-a0cf-74925b070014-socket-dir\") pod \"csi-node-driver-z7gkl\" (UID: \"de3f24db-d343-45e7-a0cf-74925b070014\") " pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:44.555553 kubelet[3537]: E1216 12:27:44.554546 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.555721 kubelet[3537]: W1216 12:27:44.555549 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Dec 16 12:27:44.555721 kubelet[3537]: E1216 12:27:44.555712 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:27:44.560351 kubelet[3537]: E1216 12:27:44.560301 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.560351 kubelet[3537]: W1216 12:27:44.560341 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:27:44.561832 kubelet[3537]: E1216 12:27:44.561561 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:27:44.568276 kubelet[3537]: E1216 12:27:44.568226 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.568427 kubelet[3537]: W1216 12:27:44.568265 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:27:44.568427 kubelet[3537]: E1216 12:27:44.568413 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:27:44.571910 kubelet[3537]: E1216 12:27:44.569853 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.571910 kubelet[3537]: W1216 12:27:44.571576 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:27:44.571910 kubelet[3537]: E1216 12:27:44.571611 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:27:44.572849 kubelet[3537]: E1216 12:27:44.572706 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.572849 kubelet[3537]: W1216 12:27:44.572748 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:27:44.572849 kubelet[3537]: E1216 12:27:44.572780 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:27:44.577497 kubelet[3537]: E1216 12:27:44.574396 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.578819 kubelet[3537]: W1216 12:27:44.574435 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:27:44.581416 kubelet[3537]: E1216 12:27:44.580645 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:27:44.589295 kubelet[3537]: E1216 12:27:44.585430 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:27:44.589295 kubelet[3537]: W1216 12:27:44.587622 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:27:44.589295 kubelet[3537]: E1216 12:27:44.587670 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 16 12:27:44.590585 kubelet[3537]: E1216 12:27:44.590554 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:27:44.591432 kubelet[3537]: W1216 12:27:44.591385 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:27:44.595639 kubelet[3537]: E1216 12:27:44.594149 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:27:44.705029 containerd[2000]: time="2025-12-16T12:27:44.704882009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2mmbh,Uid:98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19,Namespace:calico-system,Attempt:0,}"
Dec 16 12:27:44.721031 systemd[1]: Started cri-containerd-42382fa06e65050e65a92d28325762ccb3ed6cc9d5132eb6ac6d3123b80cac7d.scope - libcontainer container 42382fa06e65050e65a92d28325762ccb3ed6cc9d5132eb6ac6d3123b80cac7d.
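The kubelet messages above are one failure reported three ways: the `nodeagent~uds/uds` FlexVolume driver binary is missing from `$PATH`, its `init` call therefore produces no output, and the empty output fails JSON unmarshalling. A minimal sketch (not part of this log; the sample lines and the regex are assumptions) of tallying such journal output by its klog source location, which makes the repetition pattern obvious:

```python
import re

# Stand-in sample of kubelet journal lines like those in the log above.
sample = '''Dec 16 12:27:44.590585 kubelet[3537]: E1216 12:27:44.590554 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:27:44.591432 kubelet[3537]: W1216 12:27:44.591385 3537 driver-call.go:149] FlexVolume: driver call failed: executable not found in $PATH
Dec 16 12:27:44.594149 kubelet[3537]: E1216 12:27:44.594149 3537 plugins.go:703] "Error dynamically probing plugins"'''

# klog headers look like "E1216 12:27:44.590554 3537 driver-call.go:262]";
# capture the file:line source marker just before the closing bracket.
pattern = re.compile(r'kubelet\[\d+\]: [EWI]\d{4} \S+ \d+ (\S+?)\]')

counts = {}
for line in sample.splitlines():
    m = pattern.search(line)
    if m:
        counts[m.group(1)] = counts.get(m.group(1), 0) + 1

print(counts)
```

Run against the full journal instead of the sample, this would show the same three sources repeating in lockstep, confirming a single root cause rather than three independent faults.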
Dec 16 12:27:44.775644 containerd[2000]: time="2025-12-16T12:27:44.775578605Z" level=info msg="connecting to shim c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0" address="unix:///run/containerd/s/b3b90c518b97802776e45a80ea858057a5e3e18d6bc84c37d6b5549bcded3843" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:27:44.801545 kubelet[3537]: E1216 12:27:44.799838 3537 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:27:44.801545 kubelet[3537]: W1216 12:27:44.800552 3537 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:27:44.801545 kubelet[3537]: E1216 12:27:44.800643 3537 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:27:44.870681 systemd[1]: Started cri-containerd-c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0.scope - libcontainer container c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0.
Dec 16 12:27:45.189838 containerd[2000]: time="2025-12-16T12:27:45.189634815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2mmbh,Uid:98d1ccbe-37bf-4d66-8c9c-aa0b5dbd7a19,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\""
Dec 16 12:27:45.195280 containerd[2000]: time="2025-12-16T12:27:45.194652603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 16 12:27:45.259419 containerd[2000]: time="2025-12-16T12:27:45.259238908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77b84d78dc-w4drz,Uid:41117ced-83c7-4efa-b0ee-70f092df907e,Namespace:calico-system,Attempt:0,} returns sandbox id \"42382fa06e65050e65a92d28325762ccb3ed6cc9d5132eb6ac6d3123b80cac7d\""
Dec 16 12:27:46.169435 kubelet[3537]: E1216 12:27:46.169336 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014"
Dec 16 12:27:46.814243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299271314.mount: Deactivated successfully.
Dec 16 12:27:46.964537 containerd[2000]: time="2025-12-16T12:27:46.963890648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:46.966618 containerd[2000]: time="2025-12-16T12:27:46.966296240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570"
Dec 16 12:27:46.968897 containerd[2000]: time="2025-12-16T12:27:46.968838116Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:46.973241 containerd[2000]: time="2025-12-16T12:27:46.973195160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:46.974502 containerd[2000]: time="2025-12-16T12:27:46.974347160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.779637293s"
Dec 16 12:27:46.974502 containerd[2000]: time="2025-12-16T12:27:46.974402732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Dec 16 12:27:46.977123 containerd[2000]: time="2025-12-16T12:27:46.976704140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 12:27:46.984526 containerd[2000]: time="2025-12-16T12:27:46.984120008Z" level=info msg="CreateContainer within sandbox \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 12:27:47.005035 containerd[2000]: time="2025-12-16T12:27:47.004984588Z" level=info msg="Container d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:27:47.015033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381007220.mount: Deactivated successfully.
Dec 16 12:27:47.025590 containerd[2000]: time="2025-12-16T12:27:47.025436393Z" level=info msg="CreateContainer within sandbox \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596\""
Dec 16 12:27:47.027500 containerd[2000]: time="2025-12-16T12:27:47.026642201Z" level=info msg="StartContainer for \"d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596\""
Dec 16 12:27:47.031238 containerd[2000]: time="2025-12-16T12:27:47.031157801Z" level=info msg="connecting to shim d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596" address="unix:///run/containerd/s/b3b90c518b97802776e45a80ea858057a5e3e18d6bc84c37d6b5549bcded3843" protocol=ttrpc version=3
Dec 16 12:27:47.069807 systemd[1]: Started cri-containerd-d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596.scope - libcontainer container d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596.
Dec 16 12:27:47.200361 containerd[2000]: time="2025-12-16T12:27:47.199848041Z" level=info msg="StartContainer for \"d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596\" returns successfully"
Dec 16 12:27:47.228581 systemd[1]: cri-containerd-d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596.scope: Deactivated successfully.
Dec 16 12:27:47.236046 containerd[2000]: time="2025-12-16T12:27:47.235845582Z" level=info msg="received container exit event container_id:\"d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596\" id:\"d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596\" pid:4131 exited_at:{seconds:1765888067 nanos:235264830}" Dec 16 12:27:47.762780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3745ca2fba71532ac17f669e183765459b2f6c51af9dca893076d6b1f56c596-rootfs.mount: Deactivated successfully. Dec 16 12:27:48.169549 kubelet[3537]: E1216 12:27:48.169417 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:27:49.282284 containerd[2000]: time="2025-12-16T12:27:49.281872688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:49.286167 containerd[2000]: time="2025-12-16T12:27:49.286066016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Dec 16 12:27:49.288346 containerd[2000]: time="2025-12-16T12:27:49.288169016Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:49.298300 containerd[2000]: time="2025-12-16T12:27:49.297705548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:49.302132 containerd[2000]: time="2025-12-16T12:27:49.302079152Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.325313356s" Dec 16 12:27:49.302338 containerd[2000]: time="2025-12-16T12:27:49.302310320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:27:49.304949 containerd[2000]: time="2025-12-16T12:27:49.304227608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:27:49.338237 containerd[2000]: time="2025-12-16T12:27:49.338186060Z" level=info msg="CreateContainer within sandbox \"42382fa06e65050e65a92d28325762ccb3ed6cc9d5132eb6ac6d3123b80cac7d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:27:49.354749 containerd[2000]: time="2025-12-16T12:27:49.354690992Z" level=info msg="Container 6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:49.376640 containerd[2000]: time="2025-12-16T12:27:49.376588712Z" level=info msg="CreateContainer within sandbox \"42382fa06e65050e65a92d28325762ccb3ed6cc9d5132eb6ac6d3123b80cac7d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986\"" Dec 16 12:27:49.378315 containerd[2000]: time="2025-12-16T12:27:49.378245912Z" level=info msg="StartContainer for \"6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986\"" Dec 16 12:27:49.382352 containerd[2000]: time="2025-12-16T12:27:49.382275668Z" level=info msg="connecting to shim 6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986" 
address="unix:///run/containerd/s/6c0732fcda821744a5092acaeb76d5fa0e68c70b7ca951fa3b5f868e6f4ce9dc" protocol=ttrpc version=3 Dec 16 12:27:49.432769 systemd[1]: Started cri-containerd-6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986.scope - libcontainer container 6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986. Dec 16 12:27:49.521235 containerd[2000]: time="2025-12-16T12:27:49.521155065Z" level=info msg="StartContainer for \"6b9fa679334cc24f5c0f6361e89ba941ce380fb386ce7330df0ee8a52425b986\" returns successfully" Dec 16 12:27:50.168981 kubelet[3537]: E1216 12:27:50.168894 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:27:50.440730 kubelet[3537]: I1216 12:27:50.440505 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77b84d78dc-w4drz" podStartSLOduration=2.400474026 podStartE2EDuration="6.44047945s" podCreationTimestamp="2025-12-16 12:27:44 +0000 UTC" firstStartedPulling="2025-12-16 12:27:45.263967136 +0000 UTC m=+36.411157334" lastFinishedPulling="2025-12-16 12:27:49.30397256 +0000 UTC m=+40.451162758" observedRunningTime="2025-12-16 12:27:50.440395078 +0000 UTC m=+41.587585300" watchObservedRunningTime="2025-12-16 12:27:50.44047945 +0000 UTC m=+41.587669672" Dec 16 12:27:52.170392 kubelet[3537]: E1216 12:27:52.169519 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:27:52.784508 containerd[2000]: 
time="2025-12-16T12:27:52.783716329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:52.785961 containerd[2000]: time="2025-12-16T12:27:52.785876557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:27:52.788125 containerd[2000]: time="2025-12-16T12:27:52.788042341Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:52.794485 containerd[2000]: time="2025-12-16T12:27:52.794123533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:52.797227 containerd[2000]: time="2025-12-16T12:27:52.797130913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.492846737s" Dec 16 12:27:52.797361 containerd[2000]: time="2025-12-16T12:27:52.797229553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:27:52.807583 containerd[2000]: time="2025-12-16T12:27:52.807094081Z" level=info msg="CreateContainer within sandbox \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:27:52.833097 containerd[2000]: time="2025-12-16T12:27:52.833037925Z" level=info msg="Container 
a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:52.855537 containerd[2000]: time="2025-12-16T12:27:52.855423014Z" level=info msg="CreateContainer within sandbox \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc\"" Dec 16 12:27:52.858226 containerd[2000]: time="2025-12-16T12:27:52.858165398Z" level=info msg="StartContainer for \"a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc\"" Dec 16 12:27:52.862564 containerd[2000]: time="2025-12-16T12:27:52.862419278Z" level=info msg="connecting to shim a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc" address="unix:///run/containerd/s/b3b90c518b97802776e45a80ea858057a5e3e18d6bc84c37d6b5549bcded3843" protocol=ttrpc version=3 Dec 16 12:27:52.920761 systemd[1]: Started cri-containerd-a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc.scope - libcontainer container a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc. Dec 16 12:27:53.044015 containerd[2000]: time="2025-12-16T12:27:53.043853626Z" level=info msg="StartContainer for \"a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc\" returns successfully" Dec 16 12:27:53.971329 containerd[2000]: time="2025-12-16T12:27:53.971202291Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:27:53.976704 systemd[1]: cri-containerd-a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc.scope: Deactivated successfully. 
Dec 16 12:27:53.977242 systemd[1]: cri-containerd-a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc.scope: Consumed 917ms CPU time, 189.5M memory peak, 165.9M written to disk. Dec 16 12:27:53.981938 containerd[2000]: time="2025-12-16T12:27:53.981810519Z" level=info msg="received container exit event container_id:\"a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc\" id:\"a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc\" pid:4237 exited_at:{seconds:1765888073 nanos:981219579}" Dec 16 12:27:54.022855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8ecee6bd8d43b7151e4879d7d8afdd3ad7a94f654619b6f3f236b5538a216bc-rootfs.mount: Deactivated successfully. Dec 16 12:27:54.057878 kubelet[3537]: I1216 12:27:54.057839 3537 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:27:54.139162 systemd[1]: Created slice kubepods-burstable-pod98415e12_7b72_4a86_b95c_2c6f6d6cfcd8.slice - libcontainer container kubepods-burstable-pod98415e12_7b72_4a86_b95c_2c6f6d6cfcd8.slice. Dec 16 12:27:54.162260 systemd[1]: Created slice kubepods-burstable-pod5fdee508_56a5_4d15_8a4a_c7c165668cad.slice - libcontainer container kubepods-burstable-pod5fdee508_56a5_4d15_8a4a_c7c165668cad.slice. Dec 16 12:27:54.246118 systemd[1]: Created slice kubepods-besteffort-podde3f24db_d343_45e7_a0cf_74925b070014.slice - libcontainer container kubepods-besteffort-podde3f24db_d343_45e7_a0cf_74925b070014.slice. 
Dec 16 12:27:54.246748 kubelet[3537]: I1216 12:27:54.246641 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhbqp\" (UniqueName: \"kubernetes.io/projected/5fdee508-56a5-4d15-8a4a-c7c165668cad-kube-api-access-xhbqp\") pod \"coredns-674b8bbfcf-sxz7j\" (UID: \"5fdee508-56a5-4d15-8a4a-c7c165668cad\") " pod="kube-system/coredns-674b8bbfcf-sxz7j" Dec 16 12:27:54.246748 kubelet[3537]: I1216 12:27:54.246720 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98415e12-7b72-4a86-b95c-2c6f6d6cfcd8-config-volume\") pod \"coredns-674b8bbfcf-smgbl\" (UID: \"98415e12-7b72-4a86-b95c-2c6f6d6cfcd8\") " pod="kube-system/coredns-674b8bbfcf-smgbl" Dec 16 12:27:54.246942 kubelet[3537]: I1216 12:27:54.246760 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5z8\" (UniqueName: \"kubernetes.io/projected/98415e12-7b72-4a86-b95c-2c6f6d6cfcd8-kube-api-access-lp5z8\") pod \"coredns-674b8bbfcf-smgbl\" (UID: \"98415e12-7b72-4a86-b95c-2c6f6d6cfcd8\") " pod="kube-system/coredns-674b8bbfcf-smgbl" Dec 16 12:27:54.246942 kubelet[3537]: I1216 12:27:54.246823 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fdee508-56a5-4d15-8a4a-c7c165668cad-config-volume\") pod \"coredns-674b8bbfcf-sxz7j\" (UID: \"5fdee508-56a5-4d15-8a4a-c7c165668cad\") " pod="kube-system/coredns-674b8bbfcf-sxz7j" Dec 16 12:27:54.252728 containerd[2000]: time="2025-12-16T12:27:54.252669432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gkl,Uid:de3f24db-d343-45e7-a0cf-74925b070014,Namespace:calico-system,Attempt:0,}" Dec 16 12:27:54.333978 systemd[1]: Created slice kubepods-besteffort-pod76e6c14e_6dea_41f8_8e8a_730830194387.slice - libcontainer container 
kubepods-besteffort-pod76e6c14e_6dea_41f8_8e8a_730830194387.slice. Dec 16 12:27:54.347383 kubelet[3537]: I1216 12:27:54.347278 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsrd2\" (UniqueName: \"kubernetes.io/projected/76e6c14e-6dea-41f8-8e8a-730830194387-kube-api-access-jsrd2\") pod \"calico-apiserver-757bdf8b44-h2nb9\" (UID: \"76e6c14e-6dea-41f8-8e8a-730830194387\") " pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" Dec 16 12:27:54.349673 kubelet[3537]: I1216 12:27:54.349588 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbpd9\" (UniqueName: \"kubernetes.io/projected/c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49-kube-api-access-bbpd9\") pod \"calico-kube-controllers-64f7b777d7-gkwp7\" (UID: \"c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49\") " pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" Dec 16 12:27:54.349943 kubelet[3537]: I1216 12:27:54.349725 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49-tigera-ca-bundle\") pod \"calico-kube-controllers-64f7b777d7-gkwp7\" (UID: \"c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49\") " pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" Dec 16 12:27:54.349943 kubelet[3537]: I1216 12:27:54.349778 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/76e6c14e-6dea-41f8-8e8a-730830194387-calico-apiserver-certs\") pod \"calico-apiserver-757bdf8b44-h2nb9\" (UID: \"76e6c14e-6dea-41f8-8e8a-730830194387\") " pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" Dec 16 12:27:54.365543 systemd[1]: Created slice kubepods-besteffort-poda301cdcf_9f24_4b62_9c32_ae5e7ca3de08.slice - libcontainer container 
kubepods-besteffort-poda301cdcf_9f24_4b62_9c32_ae5e7ca3de08.slice. Dec 16 12:27:54.400075 systemd[1]: Created slice kubepods-besteffort-podc0fcc3a9_bcc7_40dd_851f_34cdc70e8f49.slice - libcontainer container kubepods-besteffort-podc0fcc3a9_bcc7_40dd_851f_34cdc70e8f49.slice. Dec 16 12:27:54.452134 kubelet[3537]: I1216 12:27:54.452066 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a301cdcf-9f24-4b62-9c32-ae5e7ca3de08-calico-apiserver-certs\") pod \"calico-apiserver-757bdf8b44-9gjd2\" (UID: \"a301cdcf-9f24-4b62-9c32-ae5e7ca3de08\") " pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" Dec 16 12:27:54.452305 kubelet[3537]: I1216 12:27:54.452145 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-backend-key-pair\") pod \"whisker-78954f85db-9vp58\" (UID: \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\") " pod="calico-system/whisker-78954f85db-9vp58" Dec 16 12:27:54.452305 kubelet[3537]: I1216 12:27:54.452211 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h445\" (UniqueName: \"kubernetes.io/projected/38be35b6-1ece-438f-9d8c-c1adab6b46e8-kube-api-access-6h445\") pod \"whisker-78954f85db-9vp58\" (UID: \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\") " pod="calico-system/whisker-78954f85db-9vp58" Dec 16 12:27:54.452305 kubelet[3537]: I1216 12:27:54.452260 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvxjw\" (UniqueName: \"kubernetes.io/projected/a301cdcf-9f24-4b62-9c32-ae5e7ca3de08-kube-api-access-qvxjw\") pod \"calico-apiserver-757bdf8b44-9gjd2\" (UID: \"a301cdcf-9f24-4b62-9c32-ae5e7ca3de08\") " pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" Dec 16 
12:27:54.454838 kubelet[3537]: I1216 12:27:54.452318 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-ca-bundle\") pod \"whisker-78954f85db-9vp58\" (UID: \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\") " pod="calico-system/whisker-78954f85db-9vp58" Dec 16 12:27:54.480167 containerd[2000]: time="2025-12-16T12:27:54.479756234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxz7j,Uid:5fdee508-56a5-4d15-8a4a-c7c165668cad,Namespace:kube-system,Attempt:0,}" Dec 16 12:27:54.481595 containerd[2000]: time="2025-12-16T12:27:54.480418886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smgbl,Uid:98415e12-7b72-4a86-b95c-2c6f6d6cfcd8,Namespace:kube-system,Attempt:0,}" Dec 16 12:27:54.520499 systemd[1]: Created slice kubepods-besteffort-pod1a7a3ca4_b553_4916_9cf0_5a9aaa1485e7.slice - libcontainer container kubepods-besteffort-pod1a7a3ca4_b553_4916_9cf0_5a9aaa1485e7.slice. Dec 16 12:27:54.555815 systemd[1]: Created slice kubepods-besteffort-pod38be35b6_1ece_438f_9d8c_c1adab6b46e8.slice - libcontainer container kubepods-besteffort-pod38be35b6_1ece_438f_9d8c_c1adab6b46e8.slice. 
Dec 16 12:27:54.558127 kubelet[3537]: I1216 12:27:54.558071 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7-config\") pod \"goldmane-666569f655-nv4z4\" (UID: \"1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7\") " pod="calico-system/goldmane-666569f655-nv4z4" Dec 16 12:27:54.562391 kubelet[3537]: I1216 12:27:54.562319 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7-goldmane-ca-bundle\") pod \"goldmane-666569f655-nv4z4\" (UID: \"1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7\") " pod="calico-system/goldmane-666569f655-nv4z4" Dec 16 12:27:54.562391 kubelet[3537]: I1216 12:27:54.562392 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7-goldmane-key-pair\") pod \"goldmane-666569f655-nv4z4\" (UID: \"1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7\") " pod="calico-system/goldmane-666569f655-nv4z4" Dec 16 12:27:54.562954 kubelet[3537]: I1216 12:27:54.562482 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jql9v\" (UniqueName: \"kubernetes.io/projected/1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7-kube-api-access-jql9v\") pod \"goldmane-666569f655-nv4z4\" (UID: \"1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7\") " pod="calico-system/goldmane-666569f655-nv4z4" Dec 16 12:27:54.683109 containerd[2000]: time="2025-12-16T12:27:54.682979991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-h2nb9,Uid:76e6c14e-6dea-41f8-8e8a-730830194387,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:27:54.683728 containerd[2000]: time="2025-12-16T12:27:54.683409591Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-9gjd2,Uid:a301cdcf-9f24-4b62-9c32-ae5e7ca3de08,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:27:54.690517 containerd[2000]: time="2025-12-16T12:27:54.687693603Z" level=error msg="Failed to destroy network for sandbox \"855edf57d7b9360c19a946e058e56cd784394cc0b61d875625abe85595d6acdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:54.735717 containerd[2000]: time="2025-12-16T12:27:54.735586383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f7b777d7-gkwp7,Uid:c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49,Namespace:calico-system,Attempt:0,}" Dec 16 12:27:54.839590 containerd[2000]: time="2025-12-16T12:27:54.838258191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv4z4,Uid:1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7,Namespace:calico-system,Attempt:0,}" Dec 16 12:27:54.893579 containerd[2000]: time="2025-12-16T12:27:54.893471068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78954f85db-9vp58,Uid:38be35b6-1ece-438f-9d8c-c1adab6b46e8,Namespace:calico-system,Attempt:0,}" Dec 16 12:27:54.955503 containerd[2000]: time="2025-12-16T12:27:54.955212556Z" level=error msg="Failed to destroy network for sandbox \"f05088d9410d7bcdef14d164dd130cd48c0da6647be7aa552db61ec76ea2ced0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.042325 systemd[1]: run-netns-cni\x2d3fade796\x2dd13c\x2d0726\x2dff97\x2d4c628ce28d7f.mount: Deactivated successfully. 
Dec 16 12:27:55.147012 containerd[2000]: time="2025-12-16T12:27:55.146851165Z" level=error msg="Failed to destroy network for sandbox \"34a14982d28a3e4444ec8de5da3a67d614de3cecb6672e5e84b492ee75b7dbbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.150722 containerd[2000]: time="2025-12-16T12:27:55.148623805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gkl,Uid:de3f24db-d343-45e7-a0cf-74925b070014,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"855edf57d7b9360c19a946e058e56cd784394cc0b61d875625abe85595d6acdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.150897 kubelet[3537]: E1216 12:27:55.148932 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855edf57d7b9360c19a946e058e56cd784394cc0b61d875625abe85595d6acdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.150897 kubelet[3537]: E1216 12:27:55.149024 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855edf57d7b9360c19a946e058e56cd784394cc0b61d875625abe85595d6acdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:55.150897 kubelet[3537]: E1216 12:27:55.149067 3537 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"855edf57d7b9360c19a946e058e56cd784394cc0b61d875625abe85595d6acdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7gkl" Dec 16 12:27:55.154313 kubelet[3537]: E1216 12:27:55.149140 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"855edf57d7b9360c19a946e058e56cd784394cc0b61d875625abe85595d6acdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:27:55.154370 systemd[1]: run-netns-cni\x2d292fca3a\x2d8c74\x2df23f\x2dddcc\x2db890bed90d78.mount: Deactivated successfully. 
Dec 16 12:27:55.237170 containerd[2000]: time="2025-12-16T12:27:55.237071557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxz7j,Uid:5fdee508-56a5-4d15-8a4a-c7c165668cad,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05088d9410d7bcdef14d164dd130cd48c0da6647be7aa552db61ec76ea2ced0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.238196 kubelet[3537]: E1216 12:27:55.237577 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05088d9410d7bcdef14d164dd130cd48c0da6647be7aa552db61ec76ea2ced0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.238196 kubelet[3537]: E1216 12:27:55.237652 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05088d9410d7bcdef14d164dd130cd48c0da6647be7aa552db61ec76ea2ced0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sxz7j" Dec 16 12:27:55.238196 kubelet[3537]: E1216 12:27:55.237687 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f05088d9410d7bcdef14d164dd130cd48c0da6647be7aa552db61ec76ea2ced0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-sxz7j" Dec 16 12:27:55.238437 kubelet[3537]: E1216 12:27:55.237760 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sxz7j_kube-system(5fdee508-56a5-4d15-8a4a-c7c165668cad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sxz7j_kube-system(5fdee508-56a5-4d15-8a4a-c7c165668cad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f05088d9410d7bcdef14d164dd130cd48c0da6647be7aa552db61ec76ea2ced0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sxz7j" podUID="5fdee508-56a5-4d15-8a4a-c7c165668cad" Dec 16 12:27:55.251245 containerd[2000]: time="2025-12-16T12:27:55.251064421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-h2nb9,Uid:76e6c14e-6dea-41f8-8e8a-730830194387,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a14982d28a3e4444ec8de5da3a67d614de3cecb6672e5e84b492ee75b7dbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.251931 kubelet[3537]: E1216 12:27:55.251780 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a14982d28a3e4444ec8de5da3a67d614de3cecb6672e5e84b492ee75b7dbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.252050 kubelet[3537]: E1216 12:27:55.251900 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a14982d28a3e4444ec8de5da3a67d614de3cecb6672e5e84b492ee75b7dbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" Dec 16 12:27:55.252127 kubelet[3537]: E1216 12:27:55.252059 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34a14982d28a3e4444ec8de5da3a67d614de3cecb6672e5e84b492ee75b7dbbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" Dec 16 12:27:55.252566 kubelet[3537]: E1216 12:27:55.252179 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-757bdf8b44-h2nb9_calico-apiserver(76e6c14e-6dea-41f8-8e8a-730830194387)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-757bdf8b44-h2nb9_calico-apiserver(76e6c14e-6dea-41f8-8e8a-730830194387)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34a14982d28a3e4444ec8de5da3a67d614de3cecb6672e5e84b492ee75b7dbbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:27:55.497842 containerd[2000]: time="2025-12-16T12:27:55.497550687Z" level=error msg="Failed to destroy network for sandbox \"3003a4544d3ed540d28f5c23eabb870408d4aede96f54592e267b28805d33c45\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.504960 containerd[2000]: time="2025-12-16T12:27:55.504802527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-9gjd2,Uid:a301cdcf-9f24-4b62-9c32-ae5e7ca3de08,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003a4544d3ed540d28f5c23eabb870408d4aede96f54592e267b28805d33c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.506296 kubelet[3537]: E1216 12:27:55.505909 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003a4544d3ed540d28f5c23eabb870408d4aede96f54592e267b28805d33c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.506296 kubelet[3537]: E1216 12:27:55.506011 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003a4544d3ed540d28f5c23eabb870408d4aede96f54592e267b28805d33c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" Dec 16 12:27:55.506296 kubelet[3537]: E1216 12:27:55.506076 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3003a4544d3ed540d28f5c23eabb870408d4aede96f54592e267b28805d33c45\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" Dec 16 12:27:55.506609 kubelet[3537]: E1216 12:27:55.506185 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-757bdf8b44-9gjd2_calico-apiserver(a301cdcf-9f24-4b62-9c32-ae5e7ca3de08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-757bdf8b44-9gjd2_calico-apiserver(a301cdcf-9f24-4b62-9c32-ae5e7ca3de08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3003a4544d3ed540d28f5c23eabb870408d4aede96f54592e267b28805d33c45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:27:55.520282 containerd[2000]: time="2025-12-16T12:27:55.520141899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:27:55.534783 containerd[2000]: time="2025-12-16T12:27:55.534566943Z" level=error msg="Failed to destroy network for sandbox \"0ef22712cf36a6927b833b251d9a2d449a5fa3b4761b668c4a23208b92f5018f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.539551 containerd[2000]: time="2025-12-16T12:27:55.539407599Z" level=error msg="Failed to destroy network for sandbox \"2314eade0b815fedd1af2928d38c257296d4554978855b609bd2ce1a6c4e05de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.541160 containerd[2000]: 
time="2025-12-16T12:27:55.539734167Z" level=error msg="Failed to destroy network for sandbox \"59b1d57c7bdc36391ea50779fc73cbf0557f7f7bf7a37c8d372f17ce35d57612\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.542521 containerd[2000]: time="2025-12-16T12:27:55.542427615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv4z4,Uid:1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef22712cf36a6927b833b251d9a2d449a5fa3b4761b668c4a23208b92f5018f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.543338 kubelet[3537]: E1216 12:27:55.543136 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef22712cf36a6927b833b251d9a2d449a5fa3b4761b668c4a23208b92f5018f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.543338 kubelet[3537]: E1216 12:27:55.543226 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef22712cf36a6927b833b251d9a2d449a5fa3b4761b668c4a23208b92f5018f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv4z4" Dec 16 12:27:55.543338 kubelet[3537]: E1216 12:27:55.543264 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ef22712cf36a6927b833b251d9a2d449a5fa3b4761b668c4a23208b92f5018f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nv4z4" Dec 16 12:27:55.543760 kubelet[3537]: E1216 12:27:55.543349 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nv4z4_calico-system(1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nv4z4_calico-system(1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ef22712cf36a6927b833b251d9a2d449a5fa3b4761b668c4a23208b92f5018f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:27:55.546162 containerd[2000]: time="2025-12-16T12:27:55.545883699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f7b777d7-gkwp7,Uid:c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b1d57c7bdc36391ea50779fc73cbf0557f7f7bf7a37c8d372f17ce35d57612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.546578 kubelet[3537]: E1216 12:27:55.546533 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"59b1d57c7bdc36391ea50779fc73cbf0557f7f7bf7a37c8d372f17ce35d57612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.547559 kubelet[3537]: E1216 12:27:55.547500 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b1d57c7bdc36391ea50779fc73cbf0557f7f7bf7a37c8d372f17ce35d57612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" Dec 16 12:27:55.548480 kubelet[3537]: E1216 12:27:55.547783 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b1d57c7bdc36391ea50779fc73cbf0557f7f7bf7a37c8d372f17ce35d57612\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" Dec 16 12:27:55.548480 kubelet[3537]: E1216 12:27:55.547882 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64f7b777d7-gkwp7_calico-system(c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64f7b777d7-gkwp7_calico-system(c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59b1d57c7bdc36391ea50779fc73cbf0557f7f7bf7a37c8d372f17ce35d57612\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:27:55.553630 containerd[2000]: time="2025-12-16T12:27:55.553558899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smgbl,Uid:98415e12-7b72-4a86-b95c-2c6f6d6cfcd8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2314eade0b815fedd1af2928d38c257296d4554978855b609bd2ce1a6c4e05de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.554294 kubelet[3537]: E1216 12:27:55.554248 3537 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2314eade0b815fedd1af2928d38c257296d4554978855b609bd2ce1a6c4e05de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.555613 kubelet[3537]: E1216 12:27:55.554504 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2314eade0b815fedd1af2928d38c257296d4554978855b609bd2ce1a6c4e05de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-smgbl" Dec 16 12:27:55.555613 kubelet[3537]: E1216 12:27:55.554548 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2314eade0b815fedd1af2928d38c257296d4554978855b609bd2ce1a6c4e05de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-smgbl" Dec 16 12:27:55.555613 kubelet[3537]: E1216 12:27:55.554623 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-smgbl_kube-system(98415e12-7b72-4a86-b95c-2c6f6d6cfcd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-smgbl_kube-system(98415e12-7b72-4a86-b95c-2c6f6d6cfcd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2314eade0b815fedd1af2928d38c257296d4554978855b609bd2ce1a6c4e05de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-smgbl" podUID="98415e12-7b72-4a86-b95c-2c6f6d6cfcd8" Dec 16 12:27:55.567666 containerd[2000]: time="2025-12-16T12:27:55.567587823Z" level=error msg="Failed to destroy network for sandbox \"f4d89a7d703d0a21afb7e57e7a4c6d0e3127fd76804ccaf596c2b16df9ce85b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.572914 containerd[2000]: time="2025-12-16T12:27:55.572827467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78954f85db-9vp58,Uid:38be35b6-1ece-438f-9d8c-c1adab6b46e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d89a7d703d0a21afb7e57e7a4c6d0e3127fd76804ccaf596c2b16df9ce85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.574529 kubelet[3537]: E1216 12:27:55.573368 3537 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d89a7d703d0a21afb7e57e7a4c6d0e3127fd76804ccaf596c2b16df9ce85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:27:55.574777 kubelet[3537]: E1216 12:27:55.574710 3537 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d89a7d703d0a21afb7e57e7a4c6d0e3127fd76804ccaf596c2b16df9ce85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78954f85db-9vp58" Dec 16 12:27:55.575482 kubelet[3537]: E1216 12:27:55.574965 3537 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4d89a7d703d0a21afb7e57e7a4c6d0e3127fd76804ccaf596c2b16df9ce85b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78954f85db-9vp58" Dec 16 12:27:55.575727 kubelet[3537]: E1216 12:27:55.575680 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78954f85db-9vp58_calico-system(38be35b6-1ece-438f-9d8c-c1adab6b46e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78954f85db-9vp58_calico-system(38be35b6-1ece-438f-9d8c-c1adab6b46e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4d89a7d703d0a21afb7e57e7a4c6d0e3127fd76804ccaf596c2b16df9ce85b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78954f85db-9vp58" podUID="38be35b6-1ece-438f-9d8c-c1adab6b46e8" Dec 16 12:27:56.023254 systemd[1]: run-netns-cni\x2d29a62640\x2d812a\x2dec3c\x2d12ea\x2d4c366f2b9b06.mount: Deactivated successfully. Dec 16 12:27:56.023420 systemd[1]: run-netns-cni\x2d44e1b788\x2dba73\x2d82e1\x2dc36e\x2d9cfa979a8b5a.mount: Deactivated successfully. Dec 16 12:27:56.023595 systemd[1]: run-netns-cni\x2d31c9c292\x2dfb9b\x2d8ed6\x2d0ed3\x2de3ba63fabc29.mount: Deactivated successfully. Dec 16 12:27:56.023732 systemd[1]: run-netns-cni\x2d14bce6d3\x2d5bc2\x2da10a\x2d94e0\x2d087d3b0cd63d.mount: Deactivated successfully. Dec 16 12:28:03.270502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657725243.mount: Deactivated successfully. Dec 16 12:28:03.331786 containerd[2000]: time="2025-12-16T12:28:03.331685974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:03.332542 containerd[2000]: time="2025-12-16T12:28:03.331991722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:28:03.335243 containerd[2000]: time="2025-12-16T12:28:03.335171026Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:03.342301 containerd[2000]: time="2025-12-16T12:28:03.342210010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:03.347151 containerd[2000]: time="2025-12-16T12:28:03.346956550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.826748423s" Dec 16 12:28:03.347151 containerd[2000]: time="2025-12-16T12:28:03.347025106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:28:03.391637 containerd[2000]: time="2025-12-16T12:28:03.388976890Z" level=info msg="CreateContainer within sandbox \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:28:03.425719 containerd[2000]: time="2025-12-16T12:28:03.425659414Z" level=info msg="Container 4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:03.450288 containerd[2000]: time="2025-12-16T12:28:03.450151030Z" level=info msg="CreateContainer within sandbox \"c8ad67fea2c1883daf883590daf819b691db7aaa19790a53a407a49c1ad9cda0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07\"" Dec 16 12:28:03.452885 containerd[2000]: time="2025-12-16T12:28:03.452771806Z" level=info msg="StartContainer for \"4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07\"" Dec 16 12:28:03.458296 containerd[2000]: time="2025-12-16T12:28:03.458208634Z" level=info msg="connecting to shim 4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07" address="unix:///run/containerd/s/b3b90c518b97802776e45a80ea858057a5e3e18d6bc84c37d6b5549bcded3843" protocol=ttrpc version=3 Dec 16 12:28:03.587899 systemd[1]: Started cri-containerd-4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07.scope - libcontainer container 4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07. 
Dec 16 12:28:03.726365 containerd[2000]: time="2025-12-16T12:28:03.726231744Z" level=info msg="StartContainer for \"4ce8843e8805e9404446bbf3ae3be5055c829ba80b47295af76af70f37a24d07\" returns successfully" Dec 16 12:28:03.990360 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:28:03.991032 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 16 12:28:04.341815 kubelet[3537]: I1216 12:28:04.341174 3537 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-backend-key-pair\") pod \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\" (UID: \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\") " Dec 16 12:28:04.341815 kubelet[3537]: I1216 12:28:04.341285 3537 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-ca-bundle\") pod \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\" (UID: \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\") " Dec 16 12:28:04.341815 kubelet[3537]: I1216 12:28:04.341334 3537 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h445\" (UniqueName: \"kubernetes.io/projected/38be35b6-1ece-438f-9d8c-c1adab6b46e8-kube-api-access-6h445\") pod \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\" (UID: \"38be35b6-1ece-438f-9d8c-c1adab6b46e8\") " Dec 16 12:28:04.344689 kubelet[3537]: I1216 12:28:04.344108 3537 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "38be35b6-1ece-438f-9d8c-c1adab6b46e8" (UID: "38be35b6-1ece-438f-9d8c-c1adab6b46e8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:28:04.356445 systemd[1]: var-lib-kubelet-pods-38be35b6\x2d1ece\x2d438f\x2d9d8c\x2dc1adab6b46e8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:28:04.362744 kubelet[3537]: I1216 12:28:04.362652 3537 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "38be35b6-1ece-438f-9d8c-c1adab6b46e8" (UID: "38be35b6-1ece-438f-9d8c-c1adab6b46e8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:28:04.371264 kubelet[3537]: I1216 12:28:04.371159 3537 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38be35b6-1ece-438f-9d8c-c1adab6b46e8-kube-api-access-6h445" (OuterVolumeSpecName: "kube-api-access-6h445") pod "38be35b6-1ece-438f-9d8c-c1adab6b46e8" (UID: "38be35b6-1ece-438f-9d8c-c1adab6b46e8"). InnerVolumeSpecName "kube-api-access-6h445". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:28:04.371947 systemd[1]: var-lib-kubelet-pods-38be35b6\x2d1ece\x2d438f\x2d9d8c\x2dc1adab6b46e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6h445.mount: Deactivated successfully. 
Dec 16 12:28:04.441876 kubelet[3537]: I1216 12:28:04.441819 3537 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-backend-key-pair\") on node \"ip-172-31-28-27\" DevicePath \"\"" Dec 16 12:28:04.442057 kubelet[3537]: I1216 12:28:04.441873 3537 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38be35b6-1ece-438f-9d8c-c1adab6b46e8-whisker-ca-bundle\") on node \"ip-172-31-28-27\" DevicePath \"\"" Dec 16 12:28:04.442057 kubelet[3537]: I1216 12:28:04.441923 3537 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6h445\" (UniqueName: \"kubernetes.io/projected/38be35b6-1ece-438f-9d8c-c1adab6b46e8-kube-api-access-6h445\") on node \"ip-172-31-28-27\" DevicePath \"\"" Dec 16 12:28:04.612000 systemd[1]: Removed slice kubepods-besteffort-pod38be35b6_1ece_438f_9d8c_c1adab6b46e8.slice - libcontainer container kubepods-besteffort-pod38be35b6_1ece_438f_9d8c_c1adab6b46e8.slice. Dec 16 12:28:04.675514 kubelet[3537]: I1216 12:28:04.675038 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2mmbh" podStartSLOduration=2.5192360689999997 podStartE2EDuration="20.674986668s" podCreationTimestamp="2025-12-16 12:27:44 +0000 UTC" firstStartedPulling="2025-12-16 12:27:45.192791187 +0000 UTC m=+36.339981385" lastFinishedPulling="2025-12-16 12:28:03.348541786 +0000 UTC m=+54.495731984" observedRunningTime="2025-12-16 12:28:04.674093628 +0000 UTC m=+55.821283826" watchObservedRunningTime="2025-12-16 12:28:04.674986668 +0000 UTC m=+55.822176854" Dec 16 12:28:04.794373 systemd[1]: Created slice kubepods-besteffort-pod75aaea04_37f8_41d2_8060_6e5472e00f96.slice - libcontainer container kubepods-besteffort-pod75aaea04_37f8_41d2_8060_6e5472e00f96.slice. 
Dec 16 12:28:04.845436 kubelet[3537]: I1216 12:28:04.845330 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/75aaea04-37f8-41d2-8060-6e5472e00f96-whisker-backend-key-pair\") pod \"whisker-c4db486f6-22tfh\" (UID: \"75aaea04-37f8-41d2-8060-6e5472e00f96\") " pod="calico-system/whisker-c4db486f6-22tfh" Dec 16 12:28:04.845624 kubelet[3537]: I1216 12:28:04.845525 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75aaea04-37f8-41d2-8060-6e5472e00f96-whisker-ca-bundle\") pod \"whisker-c4db486f6-22tfh\" (UID: \"75aaea04-37f8-41d2-8060-6e5472e00f96\") " pod="calico-system/whisker-c4db486f6-22tfh" Dec 16 12:28:04.845624 kubelet[3537]: I1216 12:28:04.845591 3537 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25hgd\" (UniqueName: \"kubernetes.io/projected/75aaea04-37f8-41d2-8060-6e5472e00f96-kube-api-access-25hgd\") pod \"whisker-c4db486f6-22tfh\" (UID: \"75aaea04-37f8-41d2-8060-6e5472e00f96\") " pod="calico-system/whisker-c4db486f6-22tfh" Dec 16 12:28:05.103946 containerd[2000]: time="2025-12-16T12:28:05.103827790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c4db486f6-22tfh,Uid:75aaea04-37f8-41d2-8060-6e5472e00f96,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:05.173853 kubelet[3537]: I1216 12:28:05.173788 3537 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38be35b6-1ece-438f-9d8c-c1adab6b46e8" path="/var/lib/kubelet/pods/38be35b6-1ece-438f-9d8c-c1adab6b46e8/volumes" Dec 16 12:28:05.408130 (udev-worker)[4532]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:28:05.412658 systemd-networkd[1865]: cali301e6af2273: Link UP Dec 16 12:28:05.414143 systemd-networkd[1865]: cali301e6af2273: Gained carrier Dec 16 12:28:05.447531 containerd[2000]: 2025-12-16 12:28:05.151 [INFO][4585] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:28:05.447531 containerd[2000]: 2025-12-16 12:28:05.231 [INFO][4585] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0 whisker-c4db486f6- calico-system 75aaea04-37f8-41d2-8060-6e5472e00f96 954 0 2025-12-16 12:28:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c4db486f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-27 whisker-c4db486f6-22tfh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali301e6af2273 [] [] }} ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-" Dec 16 12:28:05.447531 containerd[2000]: 2025-12-16 12:28:05.232 [INFO][4585] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0" Dec 16 12:28:05.447531 containerd[2000]: 2025-12-16 12:28:05.322 [INFO][4595] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" HandleID="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Workload="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0" Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.322 [INFO][4595] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" HandleID="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Workload="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8260), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-27", "pod":"whisker-c4db486f6-22tfh", "timestamp":"2025-12-16 12:28:05.322295639 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.322 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.322 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.322 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27'
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.339 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" host="ip-172-31-28-27"
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.348 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27"
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.356 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.358 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.362 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:05.448053 containerd[2000]: 2025-12-16 12:28:05.362 [INFO][4595] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" host="ip-172-31-28-27"
Dec 16 12:28:05.449425 containerd[2000]: 2025-12-16 12:28:05.365 [INFO][4595] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e
Dec 16 12:28:05.449425 containerd[2000]: 2025-12-16 12:28:05.375 [INFO][4595] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" host="ip-172-31-28-27"
Dec 16 12:28:05.449425 containerd[2000]: 2025-12-16 12:28:05.389 [INFO][4595] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.65/26] block=192.168.29.64/26 handle="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" host="ip-172-31-28-27"
Dec 16 12:28:05.449425 containerd[2000]: 2025-12-16 12:28:05.390 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.65/26] handle="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" host="ip-172-31-28-27"
Dec 16 12:28:05.449425 containerd[2000]: 2025-12-16 12:28:05.390 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 12:28:05.449425 containerd[2000]: 2025-12-16 12:28:05.390 [INFO][4595] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.65/26] IPv6=[] ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" HandleID="k8s-pod-network.36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Workload="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0"
Dec 16 12:28:05.449770 containerd[2000]: 2025-12-16 12:28:05.397 [INFO][4585] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0", GenerateName:"whisker-c4db486f6-", Namespace:"calico-system", SelfLink:"", UID:"75aaea04-37f8-41d2-8060-6e5472e00f96", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c4db486f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"whisker-c4db486f6-22tfh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali301e6af2273", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:28:05.449770 containerd[2000]: 2025-12-16 12:28:05.397 [INFO][4585] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.65/32] ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0"
Dec 16 12:28:05.449964 containerd[2000]: 2025-12-16 12:28:05.397 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali301e6af2273 ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0"
Dec 16 12:28:05.449964 containerd[2000]: 2025-12-16 12:28:05.415 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0"
Dec 16 12:28:05.450065 containerd[2000]: 2025-12-16 12:28:05.416 [INFO][4585] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0", GenerateName:"whisker-c4db486f6-", Namespace:"calico-system", SelfLink:"", UID:"75aaea04-37f8-41d2-8060-6e5472e00f96", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c4db486f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e", Pod:"whisker-c4db486f6-22tfh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.29.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali301e6af2273", MAC:"8e:59:9c:4a:97:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:28:05.450177 containerd[2000]: 2025-12-16 12:28:05.438 [INFO][4585] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" Namespace="calico-system" Pod="whisker-c4db486f6-22tfh" WorkloadEndpoint="ip--172--31--28--27-k8s-whisker--c4db486f6--22tfh-eth0"
Dec 16 12:28:05.522939 containerd[2000]: time="2025-12-16T12:28:05.522677856Z" level=info msg="connecting to shim 36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e" address="unix:///run/containerd/s/cb6a094a0121703bb81b52d0c8d7f0b762e3cb11779e6911501856f7f924513c" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:28:05.582809 systemd[1]: Started cri-containerd-36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e.scope - libcontainer container 36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e.
Dec 16 12:28:05.717576 containerd[2000]: time="2025-12-16T12:28:05.716102413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c4db486f6-22tfh,Uid:75aaea04-37f8-41d2-8060-6e5472e00f96,Namespace:calico-system,Attempt:0,} returns sandbox id \"36983808ee936dc1d7cf38c49db38c9cd5bef87520d695aaf87035eeb8b69f8e\""
Dec 16 12:28:05.722222 containerd[2000]: time="2025-12-16T12:28:05.722140297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:28:06.020526 containerd[2000]: time="2025-12-16T12:28:06.020171927Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:28:06.021901 containerd[2000]: time="2025-12-16T12:28:06.021704891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:28:06.021901 containerd[2000]: time="2025-12-16T12:28:06.021855119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:28:06.022565 kubelet[3537]: E1216 12:28:06.022445 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:28:06.024486 kubelet[3537]: E1216 12:28:06.023201 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:28:06.027521 kubelet[3537]: E1216 12:28:06.027337 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:958fd80b879b4ffea30414c254bfad02,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:28:06.031759 containerd[2000]: time="2025-12-16T12:28:06.031434779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:28:06.298573 containerd[2000]: time="2025-12-16T12:28:06.297982812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:28:06.301577 containerd[2000]: time="2025-12-16T12:28:06.301206048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:28:06.301577 containerd[2000]: time="2025-12-16T12:28:06.301294944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:28:06.302974 kubelet[3537]: E1216 12:28:06.302555 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:28:06.303612 kubelet[3537]: E1216 12:28:06.302823 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:28:06.304515 kubelet[3537]: E1216 12:28:06.304216 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:28:06.306149 kubelet[3537]: E1216 12:28:06.305939 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96"
Dec 16 12:28:06.618488 kubelet[3537]: E1216 12:28:06.618192 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96"
Dec 16 12:28:06.740659 systemd-networkd[1865]: cali301e6af2273: Gained IPv6LL
Dec 16 12:28:07.173931 containerd[2000]: time="2025-12-16T12:28:07.172350025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxz7j,Uid:5fdee508-56a5-4d15-8a4a-c7c165668cad,Namespace:kube-system,Attempt:0,}"
Dec 16 12:28:07.173931 containerd[2000]: time="2025-12-16T12:28:07.172803337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smgbl,Uid:98415e12-7b72-4a86-b95c-2c6f6d6cfcd8,Namespace:kube-system,Attempt:0,}"
Dec 16 12:28:07.531288 systemd-networkd[1865]: vxlan.calico: Link UP
Dec 16 12:28:07.531302 systemd-networkd[1865]: vxlan.calico: Gained carrier
Dec 16 12:28:07.627382 (udev-worker)[4529]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:28:07.635582 kubelet[3537]: E1216 12:28:07.635065 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96"
Dec 16 12:28:07.961779 systemd-networkd[1865]: caliaa7f5ec9b32: Link UP
Dec 16 12:28:07.964400 systemd-networkd[1865]: caliaa7f5ec9b32: Gained carrier
Dec 16 12:28:08.012315 containerd[2000]: 2025-12-16 12:28:07.548 [INFO][4821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0 coredns-674b8bbfcf- kube-system 5fdee508-56a5-4d15-8a4a-c7c165668cad 881 0 2025-12-16 12:27:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-27 coredns-674b8bbfcf-sxz7j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa7f5ec9b32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-"
Dec 16 12:28:08.012315 containerd[2000]: 2025-12-16 12:28:07.549 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.012315 containerd[2000]: 2025-12-16 12:28:07.752 [INFO][4858] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" HandleID="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Workload="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.753 [INFO][4858] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" HandleID="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Workload="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c00f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-27", "pod":"coredns-674b8bbfcf-sxz7j", "timestamp":"2025-12-16 12:28:07.752733832 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.753 [INFO][4858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.753 [INFO][4858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.753 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27'
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.822 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" host="ip-172-31-28-27"
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.872 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27"
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.911 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.916 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.920 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:08.015373 containerd[2000]: 2025-12-16 12:28:07.920 [INFO][4858] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" host="ip-172-31-28-27"
Dec 16 12:28:08.017054 containerd[2000]: 2025-12-16 12:28:07.924 [INFO][4858] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559
Dec 16 12:28:08.017054 containerd[2000]: 2025-12-16 12:28:07.933 [INFO][4858] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" host="ip-172-31-28-27"
Dec 16 12:28:08.017054 containerd[2000]: 2025-12-16 12:28:07.949 [INFO][4858] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.66/26] block=192.168.29.64/26 handle="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" host="ip-172-31-28-27"
Dec 16 12:28:08.017054 containerd[2000]: 2025-12-16 12:28:07.949 [INFO][4858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.66/26] handle="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" host="ip-172-31-28-27"
Dec 16 12:28:08.017054 containerd[2000]: 2025-12-16 12:28:07.950 [INFO][4858] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 12:28:08.017054 containerd[2000]: 2025-12-16 12:28:07.950 [INFO][4858] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.66/26] IPv6=[] ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" HandleID="k8s-pod-network.5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Workload="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.020688 containerd[2000]: 2025-12-16 12:28:07.955 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5fdee508-56a5-4d15-8a4a-c7c165668cad", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"coredns-674b8bbfcf-sxz7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa7f5ec9b32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:28:08.020688 containerd[2000]: 2025-12-16 12:28:07.955 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.66/32] ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.020688 containerd[2000]: 2025-12-16 12:28:07.955 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa7f5ec9b32 ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.020688 containerd[2000]: 2025-12-16 12:28:07.965 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.020688 containerd[2000]: 2025-12-16 12:28:07.966 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5fdee508-56a5-4d15-8a4a-c7c165668cad", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559", Pod:"coredns-674b8bbfcf-sxz7j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa7f5ec9b32", MAC:"2a:66:d0:ee:65:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:28:08.020688 containerd[2000]: 2025-12-16 12:28:07.999 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" Namespace="kube-system" Pod="coredns-674b8bbfcf-sxz7j" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--sxz7j-eth0"
Dec 16 12:28:08.089522 containerd[2000]: time="2025-12-16T12:28:08.089231101Z" level=info msg="connecting to shim 5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559" address="unix:///run/containerd/s/5b09654b517ccf4a406e6096f07c3e0c0682fc5717d85f779e3feb482cb8b114" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:28:08.105295 systemd-networkd[1865]: calia67895a0120: Link UP
Dec 16 12:28:08.110588 systemd-networkd[1865]: calia67895a0120: Gained carrier
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.545 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0 coredns-674b8bbfcf- kube-system 98415e12-7b72-4a86-b95c-2c6f6d6cfcd8 874 0 2025-12-16 12:27:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-27 coredns-674b8bbfcf-smgbl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia67895a0120 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.547 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.769 [INFO][4856] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" HandleID="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Workload="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.772 [INFO][4856] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" HandleID="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Workload="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123610), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-27", "pod":"coredns-674b8bbfcf-smgbl", "timestamp":"2025-12-16 12:28:07.769847224 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.774 [INFO][4856] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.950 [INFO][4856] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.950 [INFO][4856] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27'
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.978 [INFO][4856] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:07.988 [INFO][4856] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.031 [INFO][4856] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.035 [INFO][4856] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.040 [INFO][4856] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.041 [INFO][4856] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.044 [INFO][4856] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.053 [INFO][4856] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.077 [INFO][4856] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.67/26] block=192.168.29.64/26 handle="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.077 [INFO][4856] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.67/26] handle="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" host="ip-172-31-28-27"
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.078 [INFO][4856] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Dec 16 12:28:08.170403 containerd[2000]: 2025-12-16 12:28:08.078 [INFO][4856] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.67/26] IPv6=[] ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" HandleID="k8s-pod-network.314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Workload="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0"
Dec 16 12:28:08.171649 containerd[2000]: 2025-12-16 12:28:08.087 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98415e12-7b72-4a86-b95c-2c6f6d6cfcd8", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"coredns-674b8bbfcf-smgbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia67895a0120", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Dec 16 12:28:08.171649 containerd[2000]: 2025-12-16 12:28:08.090 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.67/32] ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0"
Dec 16 12:28:08.171649 containerd[2000]: 2025-12-16 12:28:08.090 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia67895a0120 ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0"
Dec 16 12:28:08.171649 containerd[2000]: 2025-12-16 12:28:08.115 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system"
Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0" Dec 16 12:28:08.171649 containerd[2000]: 2025-12-16 12:28:08.116 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98415e12-7b72-4a86-b95c-2c6f6d6cfcd8", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7", Pod:"coredns-674b8bbfcf-smgbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.29.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia67895a0120", MAC:"ce:33:eb:c6:e1:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:08.171649 containerd[2000]: 2025-12-16 12:28:08.158 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" Namespace="kube-system" Pod="coredns-674b8bbfcf-smgbl" WorkloadEndpoint="ip--172--31--28--27-k8s-coredns--674b8bbfcf--smgbl-eth0" Dec 16 12:28:08.175937 containerd[2000]: time="2025-12-16T12:28:08.171070958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-9gjd2,Uid:a301cdcf-9f24-4b62-9c32-ae5e7ca3de08,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:28:08.178000 systemd[1]: Started cri-containerd-5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559.scope - libcontainer container 5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559. 
Dec 16 12:28:08.292372 containerd[2000]: time="2025-12-16T12:28:08.292050278Z" level=info msg="connecting to shim 314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7" address="unix:///run/containerd/s/f364197d4539080dec6a4c28f035f4fffbdc64f5af786d4e17c48d4f6b4003ca" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:08.357803 containerd[2000]: time="2025-12-16T12:28:08.357626631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sxz7j,Uid:5fdee508-56a5-4d15-8a4a-c7c165668cad,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559\"" Dec 16 12:28:08.377150 containerd[2000]: time="2025-12-16T12:28:08.377100771Z" level=info msg="CreateContainer within sandbox \"5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:28:08.407848 systemd[1]: Started cri-containerd-314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7.scope - libcontainer container 314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7. Dec 16 12:28:08.429302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2068072800.mount: Deactivated successfully. 
Dec 16 12:28:08.435141 containerd[2000]: time="2025-12-16T12:28:08.434768043Z" level=info msg="Container 92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:08.458090 containerd[2000]: time="2025-12-16T12:28:08.458032095Z" level=info msg="CreateContainer within sandbox \"5bee7645339fbbfd1ab15e3fc4f5d94bcd927d4c1d5c0496c760b02f14f09559\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752\"" Dec 16 12:28:08.463834 containerd[2000]: time="2025-12-16T12:28:08.463727943Z" level=info msg="StartContainer for \"92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752\"" Dec 16 12:28:08.471757 containerd[2000]: time="2025-12-16T12:28:08.470835039Z" level=info msg="connecting to shim 92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752" address="unix:///run/containerd/s/5b09654b517ccf4a406e6096f07c3e0c0682fc5717d85f779e3feb482cb8b114" protocol=ttrpc version=3 Dec 16 12:28:08.615756 systemd[1]: Started cri-containerd-92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752.scope - libcontainer container 92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752. Dec 16 12:28:08.620207 systemd[1]: Started sshd@9-172.31.28.27:22-139.178.89.65:48930.service - OpenSSH per-connection server daemon (139.178.89.65:48930). 
Dec 16 12:28:08.728173 containerd[2000]: time="2025-12-16T12:28:08.728088160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smgbl,Uid:98415e12-7b72-4a86-b95c-2c6f6d6cfcd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7\"" Dec 16 12:28:08.750303 containerd[2000]: time="2025-12-16T12:28:08.748738444Z" level=info msg="CreateContainer within sandbox \"314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:28:08.832050 containerd[2000]: time="2025-12-16T12:28:08.831728849Z" level=info msg="Container 3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:08.844489 systemd-networkd[1865]: cali2cdf9ed614e: Link UP Dec 16 12:28:08.846259 systemd-networkd[1865]: cali2cdf9ed614e: Gained carrier Dec 16 12:28:08.888569 containerd[2000]: time="2025-12-16T12:28:08.887661797Z" level=info msg="CreateContainer within sandbox \"314af45f65c85f3792b4265f30dd0de1d7cb93082523e129c448faddc7cd2bf7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd\"" Dec 16 12:28:08.894800 containerd[2000]: time="2025-12-16T12:28:08.894676649Z" level=info msg="StartContainer for \"3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd\"" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.411 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0 calico-apiserver-757bdf8b44- calico-apiserver a301cdcf-9f24-4b62-9c32-ae5e7ca3de08 883 0 2025-12-16 12:27:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:757bdf8b44 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-27 calico-apiserver-757bdf8b44-9gjd2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cdf9ed614e [] [] }} ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.412 [INFO][4932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.565 [INFO][4990] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" HandleID="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Workload="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.566 [INFO][4990] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" HandleID="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Workload="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003214a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-27", "pod":"calico-apiserver-757bdf8b44-9gjd2", "timestamp":"2025-12-16 12:28:08.565587964 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.566 [INFO][4990] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.566 [INFO][4990] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.566 [INFO][4990] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27' Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.638 [INFO][4990] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.701 [INFO][4990] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.730 [INFO][4990] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.745 [INFO][4990] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.783 [INFO][4990] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.783 [INFO][4990] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.788 [INFO][4990] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5 Dec 16 12:28:08.900346 containerd[2000]: 
2025-12-16 12:28:08.800 [INFO][4990] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.827 [INFO][4990] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.68/26] block=192.168.29.64/26 handle="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.829 [INFO][4990] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.68/26] handle="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" host="ip-172-31-28-27" Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.829 [INFO][4990] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:08.900346 containerd[2000]: 2025-12-16 12:28:08.829 [INFO][4990] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.68/26] IPv6=[] ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" HandleID="k8s-pod-network.1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Workload="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 12:28:08.904132 containerd[2000]: 2025-12-16 12:28:08.835 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0", GenerateName:"calico-apiserver-757bdf8b44-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"a301cdcf-9f24-4b62-9c32-ae5e7ca3de08", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"757bdf8b44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"calico-apiserver-757bdf8b44-9gjd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cdf9ed614e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:08.904132 containerd[2000]: 2025-12-16 12:28:08.835 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.68/32] ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 12:28:08.904132 containerd[2000]: 2025-12-16 12:28:08.835 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cdf9ed614e ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 
12:28:08.904132 containerd[2000]: 2025-12-16 12:28:08.848 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 12:28:08.904132 containerd[2000]: 2025-12-16 12:28:08.848 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0", GenerateName:"calico-apiserver-757bdf8b44-", Namespace:"calico-apiserver", SelfLink:"", UID:"a301cdcf-9f24-4b62-9c32-ae5e7ca3de08", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"757bdf8b44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5", Pod:"calico-apiserver-757bdf8b44-9gjd2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.29.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cdf9ed614e", MAC:"4e:fb:de:f5:20:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:08.904132 containerd[2000]: 2025-12-16 12:28:08.884 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-9gjd2" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--9gjd2-eth0" Dec 16 12:28:08.906683 containerd[2000]: time="2025-12-16T12:28:08.906591665Z" level=info msg="connecting to shim 3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd" address="unix:///run/containerd/s/f364197d4539080dec6a4c28f035f4fffbdc64f5af786d4e17c48d4f6b4003ca" protocol=ttrpc version=3 Dec 16 12:28:08.928006 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 48930 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:08.935407 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:08.936377 containerd[2000]: time="2025-12-16T12:28:08.935859173Z" level=info msg="StartContainer for \"92548738fbaa98bd6f798050c43478a25c77664a2a02c0814fe9bc9713daf752\" returns successfully" Dec 16 12:28:08.956565 systemd-logind[1972]: New session 10 of user core. Dec 16 12:28:08.963762 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:28:09.008813 systemd[1]: Started cri-containerd-3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd.scope - libcontainer container 3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd. 
Dec 16 12:28:09.040923 containerd[2000]: time="2025-12-16T12:28:09.040837178Z" level=info msg="connecting to shim 1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5" address="unix:///run/containerd/s/570605f8465ab25e2a21623cc588d8906bdaf927fb37cf24289e73c5eca62f46" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:09.181995 systemd[1]: Started cri-containerd-1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5.scope - libcontainer container 1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5. Dec 16 12:28:09.223997 containerd[2000]: time="2025-12-16T12:28:09.222190035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv4z4,Uid:1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:09.223997 containerd[2000]: time="2025-12-16T12:28:09.223117731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gkl,Uid:de3f24db-d343-45e7-a0cf-74925b070014,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:09.272000 containerd[2000]: time="2025-12-16T12:28:09.271887531Z" level=info msg="StartContainer for \"3d19fdbc24c68bc3ed981a2e1edbe73ada6368fccaa3ffcd56deb632024b39fd\" returns successfully" Dec 16 12:28:09.300708 systemd-networkd[1865]: vxlan.calico: Gained IPv6LL Dec 16 12:28:09.301975 systemd-networkd[1865]: caliaa7f5ec9b32: Gained IPv6LL Dec 16 12:28:09.704769 sshd[5060]: Connection closed by 139.178.89.65 port 48930 Dec 16 12:28:09.705872 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:09.724065 systemd[1]: sshd@9-172.31.28.27:22-139.178.89.65:48930.service: Deactivated successfully. Dec 16 12:28:09.735963 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:28:09.740810 systemd-logind[1972]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:28:09.747299 systemd-logind[1972]: Removed session 10. 
Dec 16 12:28:09.749040 systemd-networkd[1865]: calia67895a0120: Gained IPv6LL Dec 16 12:28:09.942157 kubelet[3537]: I1216 12:28:09.942059 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-smgbl" podStartSLOduration=54.942033906 podStartE2EDuration="54.942033906s" podCreationTimestamp="2025-12-16 12:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:09.792708246 +0000 UTC m=+60.939898468" watchObservedRunningTime="2025-12-16 12:28:09.942033906 +0000 UTC m=+61.089224116" Dec 16 12:28:10.069648 systemd-networkd[1865]: cali2cdf9ed614e: Gained IPv6LL Dec 16 12:28:10.107156 containerd[2000]: time="2025-12-16T12:28:10.107086683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-9gjd2,Uid:a301cdcf-9f24-4b62-9c32-ae5e7ca3de08,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1fc2ff81f46209cd9a3d7905fd58eacf058f128c23e27b2b3f4eba6824b518d5\"" Dec 16 12:28:10.115653 containerd[2000]: time="2025-12-16T12:28:10.115158807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:10.171805 containerd[2000]: time="2025-12-16T12:28:10.171706108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f7b777d7-gkwp7,Uid:c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:10.371187 systemd-networkd[1865]: calicbbd429b428: Link UP Dec 16 12:28:10.373864 systemd-networkd[1865]: calicbbd429b428: Gained carrier Dec 16 12:28:10.409047 containerd[2000]: time="2025-12-16T12:28:10.407741765Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:10.411482 containerd[2000]: time="2025-12-16T12:28:10.410877497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:10.411482 containerd[2000]: time="2025-12-16T12:28:10.410889017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:10.413915 kubelet[3537]: E1216 12:28:10.413775 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:10.413915 kubelet[3537]: E1216 12:28:10.413869 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:10.414682 kubelet[3537]: E1216 12:28:10.414545 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvxjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-9gjd2_calico-apiserver(a301cdcf-9f24-4b62-9c32-ae5e7ca3de08): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:10.416089 kubelet[3537]: E1216 12:28:10.415984 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:28:10.457684 kubelet[3537]: I1216 12:28:10.457555 3537 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sxz7j" podStartSLOduration=55.457525445 podStartE2EDuration="55.457525445s" podCreationTimestamp="2025-12-16 12:27:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:10.139652775 +0000 UTC m=+61.286842985" watchObservedRunningTime="2025-12-16 12:28:10.457525445 +0000 UTC m=+61.604715631" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.710 [INFO][5132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0 csi-node-driver- calico-system de3f24db-d343-45e7-a0cf-74925b070014 779 0 2025-12-16 12:27:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-27 
csi-node-driver-z7gkl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicbbd429b428 [] [] }} ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.711 [INFO][5132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.833 [INFO][5181] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" HandleID="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Workload="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.834 [INFO][5181] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" HandleID="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Workload="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040dd10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-27", "pod":"csi-node-driver-z7gkl", "timestamp":"2025-12-16 12:28:09.833546394 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.834 [INFO][5181] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.834 [INFO][5181] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.835 [INFO][5181] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27' Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:09.987 [INFO][5181] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.159 [INFO][5181] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.207 [INFO][5181] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.215 [INFO][5181] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.223 [INFO][5181] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.223 [INFO][5181] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.227 [INFO][5181] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.256 [INFO][5181] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" host="ip-172-31-28-27" Dec 16 
12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.332 [INFO][5181] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.69/26] block=192.168.29.64/26 handle="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.332 [INFO][5181] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.69/26] handle="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" host="ip-172-31-28-27" Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.332 [INFO][5181] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:10.460122 containerd[2000]: 2025-12-16 12:28:10.334 [INFO][5181] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.69/26] IPv6=[] ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" HandleID="k8s-pod-network.63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Workload="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" Dec 16 12:28:10.462355 containerd[2000]: 2025-12-16 12:28:10.356 [INFO][5132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de3f24db-d343-45e7-a0cf-74925b070014", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"csi-node-driver-z7gkl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbbd429b428", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:10.462355 containerd[2000]: 2025-12-16 12:28:10.357 [INFO][5132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.69/32] ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" Dec 16 12:28:10.462355 containerd[2000]: 2025-12-16 12:28:10.357 [INFO][5132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbbd429b428 ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" Dec 16 12:28:10.462355 containerd[2000]: 2025-12-16 12:28:10.372 [INFO][5132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" 
Dec 16 12:28:10.462355 containerd[2000]: 2025-12-16 12:28:10.376 [INFO][5132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de3f24db-d343-45e7-a0cf-74925b070014", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e", Pod:"csi-node-driver-z7gkl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.29.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbbd429b428", MAC:"36:31:07:69:50:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:10.462355 containerd[2000]: 
2025-12-16 12:28:10.454 [INFO][5132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" Namespace="calico-system" Pod="csi-node-driver-z7gkl" WorkloadEndpoint="ip--172--31--28--27-k8s-csi--node--driver--z7gkl-eth0" Dec 16 12:28:10.555681 containerd[2000]: time="2025-12-16T12:28:10.555413849Z" level=info msg="connecting to shim 63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e" address="unix:///run/containerd/s/008b5e2527bec4ef06111c7a143fa17c354b086c3e9663790fb70eee982a8434" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:10.661341 systemd-networkd[1865]: cali799267c9527: Link UP Dec 16 12:28:10.662908 systemd-networkd[1865]: cali799267c9527: Gained carrier Dec 16 12:28:10.702904 systemd[1]: Started cri-containerd-63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e.scope - libcontainer container 63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e. Dec 16 12:28:10.733877 kubelet[3537]: E1216 12:28:10.733625 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:09.716 [INFO][5129] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0 goldmane-666569f655- calico-system 1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7 886 0 2025-12-16 12:27:39 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-27 goldmane-666569f655-nv4z4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali799267c9527 [] [] }} ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:09.718 [INFO][5129] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:09.908 [INFO][5185] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" HandleID="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Workload="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:09.908 [INFO][5185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" HandleID="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Workload="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ab0c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-27", "pod":"goldmane-666569f655-nv4z4", "timestamp":"2025-12-16 12:28:09.908116674 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:09.908 [INFO][5185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.332 [INFO][5185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.333 [INFO][5185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27' Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.434 [INFO][5185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.474 [INFO][5185] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.505 [INFO][5185] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.520 [INFO][5185] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.538 [INFO][5185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.541 [INFO][5185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.551 [INFO][5185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65 Dec 16 12:28:10.736632 
containerd[2000]: 2025-12-16 12:28:10.582 [INFO][5185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.638 [INFO][5185] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.70/26] block=192.168.29.64/26 handle="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.638 [INFO][5185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.70/26] handle="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" host="ip-172-31-28-27" Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.641 [INFO][5185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:10.736632 containerd[2000]: 2025-12-16 12:28:10.641 [INFO][5185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.70/26] IPv6=[] ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" HandleID="k8s-pod-network.66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Workload="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.739396 containerd[2000]: 2025-12-16 12:28:10.655 [INFO][5129] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7", 
ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"goldmane-666569f655-nv4z4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali799267c9527", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:10.739396 containerd[2000]: 2025-12-16 12:28:10.656 [INFO][5129] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.70/32] ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.739396 containerd[2000]: 2025-12-16 12:28:10.656 [INFO][5129] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali799267c9527 ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.739396 containerd[2000]: 2025-12-16 12:28:10.663 [INFO][5129] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.739396 containerd[2000]: 2025-12-16 12:28:10.664 [INFO][5129] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65", Pod:"goldmane-666569f655-nv4z4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.29.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali799267c9527", MAC:"aa:90:e2:da:fe:e8", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:10.739396 containerd[2000]: 2025-12-16 12:28:10.715 [INFO][5129] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" Namespace="calico-system" Pod="goldmane-666569f655-nv4z4" WorkloadEndpoint="ip--172--31--28--27-k8s-goldmane--666569f655--nv4z4-eth0" Dec 16 12:28:10.843606 containerd[2000]: time="2025-12-16T12:28:10.843272683Z" level=info msg="connecting to shim 66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65" address="unix:///run/containerd/s/ecb3801176af93c26024cf2cd06df3f1e61bdd2bf139fc50298369a92b9a14a7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:10.945942 systemd-networkd[1865]: cali5c48060a545: Link UP Dec 16 12:28:10.952064 systemd-networkd[1865]: cali5c48060a545: Gained carrier Dec 16 12:28:10.998619 systemd[1]: Started cri-containerd-66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65.scope - libcontainer container 66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65. 
Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.428 [INFO][5221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0 calico-kube-controllers-64f7b777d7- calico-system c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49 884 0 2025-12-16 12:27:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64f7b777d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-27 calico-kube-controllers-64f7b777d7-gkwp7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5c48060a545 [] [] }} ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.429 [INFO][5221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.625 [INFO][5246] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" HandleID="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Workload="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.626 [INFO][5246] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" HandleID="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Workload="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003124c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-27", "pod":"calico-kube-controllers-64f7b777d7-gkwp7", "timestamp":"2025-12-16 12:28:10.625258002 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.627 [INFO][5246] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.638 [INFO][5246] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.638 [INFO][5246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27' Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.711 [INFO][5246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.742 [INFO][5246] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.770 [INFO][5246] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.783 [INFO][5246] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.796 [INFO][5246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.798 [INFO][5246] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.819 [INFO][5246] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.845 [INFO][5246] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.900 [INFO][5246] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.71/26] block=192.168.29.64/26 
handle="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.901 [INFO][5246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.71/26] handle="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" host="ip-172-31-28-27" Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.901 [INFO][5246] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:11.002740 containerd[2000]: 2025-12-16 12:28:10.901 [INFO][5246] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.71/26] IPv6=[] ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" HandleID="k8s-pod-network.ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Workload="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.004182 containerd[2000]: 2025-12-16 12:28:10.915 [INFO][5221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0", GenerateName:"calico-kube-controllers-64f7b777d7-", Namespace:"calico-system", SelfLink:"", UID:"c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f7b777d7", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"calico-kube-controllers-64f7b777d7-gkwp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c48060a545", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:11.004182 containerd[2000]: 2025-12-16 12:28:10.917 [INFO][5221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.71/32] ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.004182 containerd[2000]: 2025-12-16 12:28:10.925 [INFO][5221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c48060a545 ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.004182 containerd[2000]: 2025-12-16 12:28:10.955 [INFO][5221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" 
WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.004182 containerd[2000]: 2025-12-16 12:28:10.962 [INFO][5221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0", GenerateName:"calico-kube-controllers-64f7b777d7-", Namespace:"calico-system", SelfLink:"", UID:"c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64f7b777d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb", Pod:"calico-kube-controllers-64f7b777d7-gkwp7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.29.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c48060a545", 
MAC:"9e:3c:02:d4:44:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:11.004182 containerd[2000]: 2025-12-16 12:28:10.992 [INFO][5221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" Namespace="calico-system" Pod="calico-kube-controllers-64f7b777d7-gkwp7" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--kube--controllers--64f7b777d7--gkwp7-eth0" Dec 16 12:28:11.146869 containerd[2000]: time="2025-12-16T12:28:11.146787376Z" level=info msg="connecting to shim ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb" address="unix:///run/containerd/s/408fb042131470083e62d57eb255810d872c0931f3c590ce2201e50eb107b216" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:11.171009 containerd[2000]: time="2025-12-16T12:28:11.170152168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7gkl,Uid:de3f24db-d343-45e7-a0cf-74925b070014,Namespace:calico-system,Attempt:0,} returns sandbox id \"63baf2a2623f07e6297d028c893c01f3d013c3b30881b1dccaa3404a04d1085e\"" Dec 16 12:28:11.176899 containerd[2000]: time="2025-12-16T12:28:11.176717633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:28:11.178886 containerd[2000]: time="2025-12-16T12:28:11.178723421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-h2nb9,Uid:76e6c14e-6dea-41f8-8e8a-730830194387,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:28:11.265800 systemd[1]: Started cri-containerd-ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb.scope - libcontainer container ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb. 
Dec 16 12:28:11.494320 containerd[2000]: time="2025-12-16T12:28:11.494144814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:11.504653 containerd[2000]: time="2025-12-16T12:28:11.504361854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:28:11.505140 containerd[2000]: time="2025-12-16T12:28:11.505071702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:28:11.508243 kubelet[3537]: E1216 12:28:11.508166 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:11.511972 kubelet[3537]: E1216 12:28:11.508252 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:11.511972 kubelet[3537]: E1216 12:28:11.510530 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:11.528145 containerd[2000]: time="2025-12-16T12:28:11.524867022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:28:11.538482 containerd[2000]: time="2025-12-16T12:28:11.537723042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nv4z4,Uid:1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"66059f8d28d0551e8a06594b681748cad09e91cb2cf850083a5199c664896c65\"" Dec 16 12:28:11.603085 systemd-networkd[1865]: calidd57830d10d: Link UP Dec 16 12:28:11.605784 systemd-networkd[1865]: calidd57830d10d: Gained carrier Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.372 [INFO][5385] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0 calico-apiserver-757bdf8b44- calico-apiserver 76e6c14e-6dea-41f8-8e8a-730830194387 882 0 2025-12-16 12:27:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:757bdf8b44 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-27 calico-apiserver-757bdf8b44-h2nb9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd57830d10d [] [] }} ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.373 [INFO][5385] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" 
Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.446 [INFO][5409] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" HandleID="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Workload="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.447 [INFO][5409] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" HandleID="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Workload="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d39d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-27", "pod":"calico-apiserver-757bdf8b44-h2nb9", "timestamp":"2025-12-16 12:28:11.44675307 +0000 UTC"}, Hostname:"ip-172-31-28-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.447 [INFO][5409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.447 [INFO][5409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.447 [INFO][5409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-27' Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.471 [INFO][5409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.488 [INFO][5409] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.507 [INFO][5409] ipam/ipam.go 511: Trying affinity for 192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.520 [INFO][5409] ipam/ipam.go 158: Attempting to load block cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.546 [INFO][5409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.29.64/26 host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.547 [INFO][5409] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.29.64/26 handle="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.551 [INFO][5409] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.568 [INFO][5409] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.29.64/26 handle="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.586 [INFO][5409] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.29.72/26] block=192.168.29.64/26 
handle="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.586 [INFO][5409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.29.72/26] handle="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" host="ip-172-31-28-27" Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.587 [INFO][5409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:11.648208 containerd[2000]: 2025-12-16 12:28:11.587 [INFO][5409] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.29.72/26] IPv6=[] ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" HandleID="k8s-pod-network.c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Workload="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.649972 containerd[2000]: 2025-12-16 12:28:11.591 [INFO][5385] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0", GenerateName:"calico-apiserver-757bdf8b44-", Namespace:"calico-apiserver", SelfLink:"", UID:"76e6c14e-6dea-41f8-8e8a-730830194387", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"757bdf8b44", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"", Pod:"calico-apiserver-757bdf8b44-h2nb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd57830d10d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:11.649972 containerd[2000]: 2025-12-16 12:28:11.591 [INFO][5385] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.29.72/32] ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.649972 containerd[2000]: 2025-12-16 12:28:11.592 [INFO][5385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd57830d10d ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.649972 containerd[2000]: 2025-12-16 12:28:11.604 [INFO][5385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.649972 
containerd[2000]: 2025-12-16 12:28:11.608 [INFO][5385] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0", GenerateName:"calico-apiserver-757bdf8b44-", Namespace:"calico-apiserver", SelfLink:"", UID:"76e6c14e-6dea-41f8-8e8a-730830194387", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"757bdf8b44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-27", ContainerID:"c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e", Pod:"calico-apiserver-757bdf8b44-h2nb9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.29.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd57830d10d", MAC:"4a:5b:f1:a8:2a:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:11.649972 
containerd[2000]: 2025-12-16 12:28:11.641 [INFO][5385] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" Namespace="calico-apiserver" Pod="calico-apiserver-757bdf8b44-h2nb9" WorkloadEndpoint="ip--172--31--28--27-k8s-calico--apiserver--757bdf8b44--h2nb9-eth0" Dec 16 12:28:11.736900 containerd[2000]: time="2025-12-16T12:28:11.736448299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64f7b777d7-gkwp7,Uid:c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba554d8cde9bb5d00b6753ca836ef0a80a7de0364b352c1dcda806d871f7bbcb\"" Dec 16 12:28:11.752315 containerd[2000]: time="2025-12-16T12:28:11.750990583Z" level=info msg="connecting to shim c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e" address="unix:///run/containerd/s/452b9b5ff29be35edbc5f30d471cd57e1ded3e0e9bd5c83fe668bc78d9860951" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:11.754322 kubelet[3537]: E1216 12:28:11.754261 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:28:11.792876 containerd[2000]: time="2025-12-16T12:28:11.790330076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:11.797868 systemd-networkd[1865]: calicbbd429b428: Gained IPv6LL Dec 16 12:28:11.799755 containerd[2000]: time="2025-12-16T12:28:11.799506296Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:28:11.799755 containerd[2000]: time="2025-12-16T12:28:11.799581704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:28:11.802295 kubelet[3537]: E1216 12:28:11.801304 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:11.802295 kubelet[3537]: E1216 12:28:11.801424 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:11.802295 kubelet[3537]: E1216 12:28:11.801753 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:11.803293 containerd[2000]: time="2025-12-16T12:28:11.803220056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:28:11.803834 kubelet[3537]: E1216 12:28:11.803598 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:28:11.861772 systemd[1]: Started cri-containerd-c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e.scope - libcontainer container c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e. 
Dec 16 12:28:11.960321 containerd[2000]: time="2025-12-16T12:28:11.960232688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-757bdf8b44-h2nb9,Uid:76e6c14e-6dea-41f8-8e8a-730830194387,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c1b970d8fc0aecd5e9c429d83603874c927fbe18950922120c6531e35daaf36e\"" Dec 16 12:28:11.988966 systemd-networkd[1865]: cali799267c9527: Gained IPv6LL Dec 16 12:28:12.086249 containerd[2000]: time="2025-12-16T12:28:12.086081945Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:12.088972 containerd[2000]: time="2025-12-16T12:28:12.088836053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:28:12.088972 containerd[2000]: time="2025-12-16T12:28:12.088923197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:12.089717 kubelet[3537]: E1216 12:28:12.089344 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:12.090147 kubelet[3537]: E1216 12:28:12.089892 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:12.090548 
containerd[2000]: time="2025-12-16T12:28:12.090493325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:28:12.091729 kubelet[3537]: E1216 12:28:12.091585 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jql9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv4z4_calico-system(1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:12.093100 kubelet[3537]: E1216 12:28:12.093040 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 
12:28:12.403658 containerd[2000]: time="2025-12-16T12:28:12.403585831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:12.405866 containerd[2000]: time="2025-12-16T12:28:12.405792307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:28:12.405866 containerd[2000]: time="2025-12-16T12:28:12.405823087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:12.406606 kubelet[3537]: E1216 12:28:12.406248 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:12.406606 kubelet[3537]: E1216 12:28:12.406310 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:12.407004 kubelet[3537]: E1216 12:28:12.406887 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f7b777d7-gkwp7_calico-system(c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:12.407754 containerd[2000]: time="2025-12-16T12:28:12.407621011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:12.408808 kubelet[3537]: E1216 12:28:12.408730 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:28:12.692804 
systemd-networkd[1865]: cali5c48060a545: Gained IPv6LL Dec 16 12:28:12.702540 containerd[2000]: time="2025-12-16T12:28:12.702479096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:12.704754 containerd[2000]: time="2025-12-16T12:28:12.704692688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:12.704917 containerd[2000]: time="2025-12-16T12:28:12.704819996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:12.705264 kubelet[3537]: E1216 12:28:12.705148 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:12.707274 kubelet[3537]: E1216 12:28:12.705847 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:12.707274 kubelet[3537]: E1216 12:28:12.706056 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsrd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-h2nb9_calico-apiserver(76e6c14e-6dea-41f8-8e8a-730830194387): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:12.707764 kubelet[3537]: E1216 12:28:12.707645 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:28:12.762055 kubelet[3537]: E1216 12:28:12.761591 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:28:12.768938 kubelet[3537]: E1216 12:28:12.768865 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:28:12.773201 kubelet[3537]: E1216 12:28:12.773127 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:28:12.775892 kubelet[3537]: E1216 12:28:12.775787 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:28:12.948801 systemd-networkd[1865]: calidd57830d10d: Gained IPv6LL Dec 16 12:28:13.771409 kubelet[3537]: E1216 12:28:13.771014 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:28:13.771409 kubelet[3537]: E1216 12:28:13.771073 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:28:14.743604 systemd[1]: Started sshd@10-172.31.28.27:22-139.178.89.65:53908.service - OpenSSH per-connection server daemon (139.178.89.65:53908). Dec 16 12:28:14.957653 sshd[5497]: Accepted publickey for core from 139.178.89.65 port 53908 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:14.963107 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:14.979181 systemd-logind[1972]: New session 11 of user core. Dec 16 12:28:14.985889 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 12:28:15.341102 sshd[5501]: Connection closed by 139.178.89.65 port 53908 Dec 16 12:28:15.343150 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:15.354992 systemd[1]: sshd@10-172.31.28.27:22-139.178.89.65:53908.service: Deactivated successfully. Dec 16 12:28:15.365144 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:28:15.368185 systemd-logind[1972]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:28:15.371979 systemd-logind[1972]: Removed session 11. Dec 16 12:28:15.671602 ntpd[2177]: Listen normally on 6 vxlan.calico 192.168.29.64:123 Dec 16 12:28:15.671689 ntpd[2177]: Listen normally on 7 cali301e6af2273 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 6 vxlan.calico 192.168.29.64:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 7 cali301e6af2273 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 8 vxlan.calico [fe80::646b:8cff:fec9:48a5%5]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 9 caliaa7f5ec9b32 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 10 calia67895a0120 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 11 cali2cdf9ed614e [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 12 calicbbd429b428 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 13 cali799267c9527 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally on 14 cali5c48060a545 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:28:15.672370 ntpd[2177]: 16 Dec 12:28:15 ntpd[2177]: Listen normally 
on 15 calidd57830d10d [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:28:15.671736 ntpd[2177]: Listen normally on 8 vxlan.calico [fe80::646b:8cff:fec9:48a5%5]:123 Dec 16 12:28:15.671784 ntpd[2177]: Listen normally on 9 caliaa7f5ec9b32 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:28:15.671827 ntpd[2177]: Listen normally on 10 calia67895a0120 [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:28:15.671871 ntpd[2177]: Listen normally on 11 cali2cdf9ed614e [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:28:15.671919 ntpd[2177]: Listen normally on 12 calicbbd429b428 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:28:15.671962 ntpd[2177]: Listen normally on 13 cali799267c9527 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:28:15.672005 ntpd[2177]: Listen normally on 14 cali5c48060a545 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:28:15.672048 ntpd[2177]: Listen normally on 15 calidd57830d10d [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:28:20.384075 systemd[1]: Started sshd@11-172.31.28.27:22-139.178.89.65:51014.service - OpenSSH per-connection server daemon (139.178.89.65:51014). Dec 16 12:28:20.592576 sshd[5529]: Accepted publickey for core from 139.178.89.65 port 51014 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:20.596606 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:20.604715 systemd-logind[1972]: New session 12 of user core. Dec 16 12:28:20.613781 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:28:20.878278 sshd[5532]: Connection closed by 139.178.89.65 port 51014 Dec 16 12:28:20.879623 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:20.892399 systemd-logind[1972]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:28:20.895391 systemd[1]: sshd@11-172.31.28.27:22-139.178.89.65:51014.service: Deactivated successfully. Dec 16 12:28:20.900239 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 16 12:28:20.919173 systemd-logind[1972]: Removed session 12. Dec 16 12:28:20.922741 systemd[1]: Started sshd@12-172.31.28.27:22-139.178.89.65:51028.service - OpenSSH per-connection server daemon (139.178.89.65:51028). Dec 16 12:28:21.125185 sshd[5544]: Accepted publickey for core from 139.178.89.65 port 51028 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:21.127773 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:21.137605 systemd-logind[1972]: New session 13 of user core. Dec 16 12:28:21.145709 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:28:21.172428 containerd[2000]: time="2025-12-16T12:28:21.172368722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:28:21.419815 containerd[2000]: time="2025-12-16T12:28:21.419651007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:21.421956 containerd[2000]: time="2025-12-16T12:28:21.421883199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:28:21.422151 containerd[2000]: time="2025-12-16T12:28:21.422019843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:28:21.422469 kubelet[3537]: E1216 12:28:21.422398 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:28:21.423606 kubelet[3537]: E1216 12:28:21.423409 
3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:28:21.423842 kubelet[3537]: E1216 12:28:21.423764 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:958fd80b879b4ffea30414c254bfad02,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:21.426756 containerd[2000]: time="2025-12-16T12:28:21.426512775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:28:21.483612 sshd[5547]: Connection closed by 139.178.89.65 port 51028 Dec 16 12:28:21.486754 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:21.498050 systemd[1]: sshd@12-172.31.28.27:22-139.178.89.65:51028.service: Deactivated successfully. Dec 16 12:28:21.503921 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:28:21.507311 systemd-logind[1972]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:28:21.533951 systemd[1]: Started sshd@13-172.31.28.27:22-139.178.89.65:51034.service - OpenSSH per-connection server daemon (139.178.89.65:51034). Dec 16 12:28:21.540194 systemd-logind[1972]: Removed session 13. 
Dec 16 12:28:21.715772 containerd[2000]: time="2025-12-16T12:28:21.715573325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:21.718602 containerd[2000]: time="2025-12-16T12:28:21.718486961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:28:21.719123 containerd[2000]: time="2025-12-16T12:28:21.718510517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:21.719198 kubelet[3537]: E1216 12:28:21.718848 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:28:21.719198 kubelet[3537]: E1216 12:28:21.718916 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:28:21.719589 kubelet[3537]: E1216 12:28:21.719448 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:21.720848 kubelet[3537]: E1216 12:28:21.720756 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:28:21.739522 sshd[5557]: Accepted publickey for core from 139.178.89.65 port 51034 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:21.741188 sshd-session[5557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:21.753280 systemd-logind[1972]: New session 14 of user core. Dec 16 12:28:21.758768 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:28:22.035419 sshd[5562]: Connection closed by 139.178.89.65 port 51034 Dec 16 12:28:22.036064 sshd-session[5557]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:22.044228 systemd[1]: sshd@13-172.31.28.27:22-139.178.89.65:51034.service: Deactivated successfully. Dec 16 12:28:22.049127 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:28:22.053534 systemd-logind[1972]: Session 14 logged out. Waiting for processes to exit. 
Dec 16 12:28:22.056572 systemd-logind[1972]: Removed session 14. Dec 16 12:28:24.171435 containerd[2000]: time="2025-12-16T12:28:24.171301781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:28:24.461639 containerd[2000]: time="2025-12-16T12:28:24.461173495Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:24.463503 containerd[2000]: time="2025-12-16T12:28:24.463347223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:28:24.463503 containerd[2000]: time="2025-12-16T12:28:24.463416475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:28:24.465074 kubelet[3537]: E1216 12:28:24.464021 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:24.465074 kubelet[3537]: E1216 12:28:24.464098 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:24.465074 kubelet[3537]: E1216 12:28:24.464441 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:24.465890 containerd[2000]: time="2025-12-16T12:28:24.464525179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:24.764903 containerd[2000]: time="2025-12-16T12:28:24.764741012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:24.768662 containerd[2000]: time="2025-12-16T12:28:24.768578072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:24.768821 containerd[2000]: time="2025-12-16T12:28:24.768587360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:24.769168 kubelet[3537]: E1216 12:28:24.769066 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:24.769168 kubelet[3537]: E1216 12:28:24.769129 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:24.770410 kubelet[3537]: E1216 12:28:24.769812 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvxjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-9gjd2_calico-apiserver(a301cdcf-9f24-4b62-9c32-ae5e7ca3de08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:24.771145 containerd[2000]: time="2025-12-16T12:28:24.770864624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:28:24.772399 kubelet[3537]: E1216 12:28:24.772265 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:28:25.039738 containerd[2000]: 
time="2025-12-16T12:28:25.039429353Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:25.041479 containerd[2000]: time="2025-12-16T12:28:25.041326373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:28:25.041667 containerd[2000]: time="2025-12-16T12:28:25.041637293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:28:25.042362 kubelet[3537]: E1216 12:28:25.041966 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:25.042362 kubelet[3537]: E1216 12:28:25.042036 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:25.042362 kubelet[3537]: E1216 12:28:25.042228 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:25.044088 kubelet[3537]: E1216 12:28:25.043999 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:28:26.170638 containerd[2000]: time="2025-12-16T12:28:26.170188735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:28:26.467278 containerd[2000]: time="2025-12-16T12:28:26.467104196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:26.469121 containerd[2000]: time="2025-12-16T12:28:26.469055096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:28:26.469235 containerd[2000]: time="2025-12-16T12:28:26.469190264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:26.469578 
kubelet[3537]: E1216 12:28:26.469520 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:26.470632 kubelet[3537]: E1216 12:28:26.470102 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:26.470632 kubelet[3537]: E1216 12:28:26.470471 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bu
ndle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f7b777d7-gkwp7_calico-system(c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:26.471593 containerd[2000]: time="2025-12-16T12:28:26.470928248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 
12:28:26.472038 kubelet[3537]: E1216 12:28:26.471835 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:28:26.738936 containerd[2000]: time="2025-12-16T12:28:26.738733834Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:26.740950 containerd[2000]: time="2025-12-16T12:28:26.740882578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:26.741198 containerd[2000]: time="2025-12-16T12:28:26.741062878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:26.741510 kubelet[3537]: E1216 12:28:26.741419 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:26.741617 kubelet[3537]: E1216 12:28:26.741509 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:26.741809 kubelet[3537]: E1216 12:28:26.741720 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsrd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-h2nb9_calico-apiserver(76e6c14e-6dea-41f8-8e8a-730830194387): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:26.743303 kubelet[3537]: E1216 12:28:26.743238 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:28:27.080183 systemd[1]: Started sshd@14-172.31.28.27:22-139.178.89.65:51036.service - OpenSSH per-connection server daemon (139.178.89.65:51036). 
Dec 16 12:28:27.284592 sshd[5579]: Accepted publickey for core from 139.178.89.65 port 51036 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:27.287194 sshd-session[5579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:27.296860 systemd-logind[1972]: New session 15 of user core. Dec 16 12:28:27.311743 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:28:27.558795 sshd[5582]: Connection closed by 139.178.89.65 port 51036 Dec 16 12:28:27.559677 sshd-session[5579]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:27.566941 systemd[1]: sshd@14-172.31.28.27:22-139.178.89.65:51036.service: Deactivated successfully. Dec 16 12:28:27.574131 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:28:27.577195 systemd-logind[1972]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:28:27.580662 systemd-logind[1972]: Removed session 15. Dec 16 12:28:28.173038 containerd[2000]: time="2025-12-16T12:28:28.172966977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:28:28.455176 containerd[2000]: time="2025-12-16T12:28:28.455037070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:28.457397 containerd[2000]: time="2025-12-16T12:28:28.457258114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:28:28.457564 containerd[2000]: time="2025-12-16T12:28:28.457322746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:28.458605 kubelet[3537]: E1216 12:28:28.457765 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:28.458605 kubelet[3537]: E1216 12:28:28.457833 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:28.458605 kubelet[3537]: E1216 12:28:28.458016 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,Mou
ntPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jql9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv4z4_calico-system(1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:28.459835 kubelet[3537]: E1216 12:28:28.459699 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:28:32.597798 systemd[1]: Started sshd@15-172.31.28.27:22-139.178.89.65:38854.service - OpenSSH per-connection server daemon (139.178.89.65:38854). Dec 16 12:28:32.792084 sshd[5602]: Accepted publickey for core from 139.178.89.65 port 38854 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:32.794404 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:32.802503 systemd-logind[1972]: New session 16 of user core. Dec 16 12:28:32.814733 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:28:33.060027 sshd[5605]: Connection closed by 139.178.89.65 port 38854 Dec 16 12:28:33.060872 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:33.066811 systemd[1]: sshd@15-172.31.28.27:22-139.178.89.65:38854.service: Deactivated successfully. Dec 16 12:28:33.072558 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:28:33.074842 systemd-logind[1972]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:28:33.079573 systemd-logind[1972]: Removed session 16. 
Dec 16 12:28:35.174272 kubelet[3537]: E1216 12:28:35.174001 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:28:38.108870 systemd[1]: Started sshd@16-172.31.28.27:22-139.178.89.65:38862.service - OpenSSH per-connection server daemon (139.178.89.65:38862). Dec 16 12:28:38.320474 sshd[5643]: Accepted publickey for core from 139.178.89.65 port 38862 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:38.322807 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:38.332222 systemd-logind[1972]: New session 17 of user core. Dec 16 12:28:38.338739 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:28:38.607767 sshd[5646]: Connection closed by 139.178.89.65 port 38862 Dec 16 12:28:38.607244 sshd-session[5643]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:38.616063 systemd[1]: sshd@16-172.31.28.27:22-139.178.89.65:38862.service: Deactivated successfully. 
Dec 16 12:28:38.621128 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:28:38.624793 systemd-logind[1972]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:28:38.628326 systemd-logind[1972]: Removed session 17. Dec 16 12:28:39.176803 kubelet[3537]: E1216 12:28:39.176726 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:28:39.179270 kubelet[3537]: E1216 12:28:39.179139 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:28:39.179813 kubelet[3537]: E1216 12:28:39.179764 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:28:40.171722 kubelet[3537]: E1216 12:28:40.171558 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:28:41.182006 kubelet[3537]: E1216 12:28:41.181757 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:28:43.644932 systemd[1]: Started sshd@17-172.31.28.27:22-139.178.89.65:57620.service 
- OpenSSH per-connection server daemon (139.178.89.65:57620). Dec 16 12:28:43.852224 sshd[5660]: Accepted publickey for core from 139.178.89.65 port 57620 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:43.856078 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:43.874682 systemd-logind[1972]: New session 18 of user core. Dec 16 12:28:43.879899 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:28:44.256604 sshd[5663]: Connection closed by 139.178.89.65 port 57620 Dec 16 12:28:44.257088 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:44.265249 systemd[1]: sshd@17-172.31.28.27:22-139.178.89.65:57620.service: Deactivated successfully. Dec 16 12:28:44.271127 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:28:44.274858 systemd-logind[1972]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:28:44.299351 systemd[1]: Started sshd@18-172.31.28.27:22-139.178.89.65:57624.service - OpenSSH per-connection server daemon (139.178.89.65:57624). Dec 16 12:28:44.304640 systemd-logind[1972]: Removed session 18. Dec 16 12:28:44.517279 sshd[5675]: Accepted publickey for core from 139.178.89.65 port 57624 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:44.520356 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:44.533522 systemd-logind[1972]: New session 19 of user core. Dec 16 12:28:44.540777 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:28:45.108952 sshd[5678]: Connection closed by 139.178.89.65 port 57624 Dec 16 12:28:45.110738 sshd-session[5675]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:45.118791 systemd-logind[1972]: Session 19 logged out. Waiting for processes to exit. 
Dec 16 12:28:45.120937 systemd[1]: sshd@18-172.31.28.27:22-139.178.89.65:57624.service: Deactivated successfully. Dec 16 12:28:45.130348 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:28:45.153878 systemd-logind[1972]: Removed session 19. Dec 16 12:28:45.157902 systemd[1]: Started sshd@19-172.31.28.27:22-139.178.89.65:57628.service - OpenSSH per-connection server daemon (139.178.89.65:57628). Dec 16 12:28:45.376563 sshd[5688]: Accepted publickey for core from 139.178.89.65 port 57628 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:45.380707 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:45.391720 systemd-logind[1972]: New session 20 of user core. Dec 16 12:28:45.396784 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:28:46.842721 sshd[5691]: Connection closed by 139.178.89.65 port 57628 Dec 16 12:28:46.844112 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:46.855143 systemd[1]: sshd@19-172.31.28.27:22-139.178.89.65:57628.service: Deactivated successfully. Dec 16 12:28:46.863810 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:28:46.867829 systemd-logind[1972]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:28:46.893069 systemd[1]: Started sshd@20-172.31.28.27:22-139.178.89.65:57636.service - OpenSSH per-connection server daemon (139.178.89.65:57636). Dec 16 12:28:46.898102 systemd-logind[1972]: Removed session 20. Dec 16 12:28:47.136489 sshd[5709]: Accepted publickey for core from 139.178.89.65 port 57636 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:47.137606 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:47.150569 systemd-logind[1972]: New session 21 of user core. Dec 16 12:28:47.157059 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 12:28:47.819734 sshd[5716]: Connection closed by 139.178.89.65 port 57636 Dec 16 12:28:47.820132 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:47.830840 systemd[1]: sshd@20-172.31.28.27:22-139.178.89.65:57636.service: Deactivated successfully. Dec 16 12:28:47.831575 systemd-logind[1972]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:28:47.841624 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:28:47.870762 systemd-logind[1972]: Removed session 21. Dec 16 12:28:47.873690 systemd[1]: Started sshd@21-172.31.28.27:22-139.178.89.65:57652.service - OpenSSH per-connection server daemon (139.178.89.65:57652). Dec 16 12:28:48.096149 sshd[5726]: Accepted publickey for core from 139.178.89.65 port 57652 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:48.098949 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:48.110271 systemd-logind[1972]: New session 22 of user core. Dec 16 12:28:48.115843 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:28:48.172550 containerd[2000]: time="2025-12-16T12:28:48.171777556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:28:48.421817 sshd[5729]: Connection closed by 139.178.89.65 port 57652 Dec 16 12:28:48.422334 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:48.433049 systemd[1]: sshd@21-172.31.28.27:22-139.178.89.65:57652.service: Deactivated successfully. Dec 16 12:28:48.442621 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:28:48.448528 systemd-logind[1972]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:28:48.451439 systemd-logind[1972]: Removed session 22. 
Dec 16 12:28:48.453990 containerd[2000]: time="2025-12-16T12:28:48.453922374Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:48.456362 containerd[2000]: time="2025-12-16T12:28:48.456237978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:28:48.457740 containerd[2000]: time="2025-12-16T12:28:48.456403482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:28:48.457875 kubelet[3537]: E1216 12:28:48.456671 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:28:48.457875 kubelet[3537]: E1216 12:28:48.456734 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:28:48.457875 kubelet[3537]: E1216 12:28:48.456904 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:958fd80b879b4ffea30414c254bfad02,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:48.462791 containerd[2000]: time="2025-12-16T12:28:48.462727866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 
12:28:48.747574 containerd[2000]: time="2025-12-16T12:28:48.746917099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:48.751395 containerd[2000]: time="2025-12-16T12:28:48.751290187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:28:48.751617 containerd[2000]: time="2025-12-16T12:28:48.751448479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:48.751859 kubelet[3537]: E1216 12:28:48.751762 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:28:48.751951 kubelet[3537]: E1216 12:28:48.751867 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:28:48.752347 kubelet[3537]: E1216 12:28:48.752252 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:48.754126 kubelet[3537]: E1216 12:28:48.754029 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:28:51.174067 containerd[2000]: time="2025-12-16T12:28:51.173631871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:51.464237 containerd[2000]: time="2025-12-16T12:28:51.464041377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:51.468476 containerd[2000]: time="2025-12-16T12:28:51.468296565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:51.468476 containerd[2000]: time="2025-12-16T12:28:51.468426177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:51.470736 
kubelet[3537]: E1216 12:28:51.470638 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:51.471284 kubelet[3537]: E1216 12:28:51.470735 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:51.471284 kubelet[3537]: E1216 12:28:51.471016 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvxjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-9gjd2_calico-apiserver(a301cdcf-9f24-4b62-9c32-ae5e7ca3de08): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:51.472895 kubelet[3537]: E1216 12:28:51.472816 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:28:53.173972 containerd[2000]: time="2025-12-16T12:28:53.173911353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:53.427908 containerd[2000]: time="2025-12-16T12:28:53.427731658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:53.430819 containerd[2000]: time="2025-12-16T12:28:53.430621990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:53.430819 containerd[2000]: time="2025-12-16T12:28:53.430773118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:53.431493 kubelet[3537]: E1216 12:28:53.431343 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:53.432946 kubelet[3537]: E1216 12:28:53.431429 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:53.432946 kubelet[3537]: E1216 12:28:53.432352 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsrd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-h2nb9_calico-apiserver(76e6c14e-6dea-41f8-8e8a-730830194387): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:53.433277 containerd[2000]: time="2025-12-16T12:28:53.432796786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:28:53.434112 kubelet[3537]: E1216 12:28:53.433556 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:28:53.462920 systemd[1]: Started 
sshd@22-172.31.28.27:22-139.178.89.65:55300.service - OpenSSH per-connection server daemon (139.178.89.65:55300). Dec 16 12:28:53.675330 sshd[5749]: Accepted publickey for core from 139.178.89.65 port 55300 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:53.677945 sshd-session[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:53.683812 containerd[2000]: time="2025-12-16T12:28:53.683617308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:53.687481 containerd[2000]: time="2025-12-16T12:28:53.685820100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:28:53.688045 containerd[2000]: time="2025-12-16T12:28:53.687813492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:53.688486 kubelet[3537]: E1216 12:28:53.688364 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:53.688486 kubelet[3537]: E1216 12:28:53.688428 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:53.688681 systemd-logind[1972]: New session 23 of user core. Dec 16 12:28:53.692508 kubelet[3537]: E1216 12:28:53.691637 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f7b777d7-gkwp7_calico-system(c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:53.692794 containerd[2000]: time="2025-12-16T12:28:53.692055696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:28:53.696745 kubelet[3537]: E1216 12:28:53.696533 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:28:53.698737 systemd[1]: Started 
session-23.scope - Session 23 of User core. Dec 16 12:28:53.962856 containerd[2000]: time="2025-12-16T12:28:53.962698645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:53.964942 containerd[2000]: time="2025-12-16T12:28:53.964862677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:28:53.965084 containerd[2000]: time="2025-12-16T12:28:53.965000713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:28:53.967498 kubelet[3537]: E1216 12:28:53.965394 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:53.967498 kubelet[3537]: E1216 12:28:53.965499 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:53.967996 kubelet[3537]: E1216 12:28:53.967920 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:53.974347 containerd[2000]: time="2025-12-16T12:28:53.974207869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:28:54.039493 sshd[5752]: Connection closed by 139.178.89.65 port 55300 Dec 16 12:28:54.040386 sshd-session[5749]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:54.049431 systemd[1]: sshd@22-172.31.28.27:22-139.178.89.65:55300.service: Deactivated successfully. Dec 16 12:28:54.050723 systemd-logind[1972]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:28:54.056027 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:28:54.066178 systemd-logind[1972]: Removed session 23. Dec 16 12:28:54.264345 containerd[2000]: time="2025-12-16T12:28:54.263721407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:54.268833 containerd[2000]: time="2025-12-16T12:28:54.268655087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:28:54.268833 containerd[2000]: time="2025-12-16T12:28:54.268716659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:28:54.269359 kubelet[3537]: E1216 12:28:54.269251 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:54.269359 kubelet[3537]: E1216 12:28:54.269327 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:54.270755 containerd[2000]: time="2025-12-16T12:28:54.270691955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:28:54.271477 kubelet[3537]: E1216 12:28:54.270343 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe
:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:54.272989 kubelet[3537]: E1216 12:28:54.272921 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:28:54.546837 containerd[2000]: 
time="2025-12-16T12:28:54.546277044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:54.549646 containerd[2000]: time="2025-12-16T12:28:54.549505020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:28:54.551687 containerd[2000]: time="2025-12-16T12:28:54.549547356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:54.551830 kubelet[3537]: E1216 12:28:54.550683 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:54.551830 kubelet[3537]: E1216 12:28:54.550752 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:54.551830 kubelet[3537]: E1216 12:28:54.550947 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jql9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv4z4_calico-system(1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:54.553821 kubelet[3537]: E1216 12:28:54.553719 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:28:59.082531 systemd[1]: Started sshd@23-172.31.28.27:22-139.178.89.65:55306.service - OpenSSH per-connection server daemon (139.178.89.65:55306). 
Dec 16 12:28:59.303512 sshd[5766]: Accepted publickey for core from 139.178.89.65 port 55306 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:59.306790 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:59.331879 systemd-logind[1972]: New session 24 of user core. Dec 16 12:28:59.338139 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:28:59.661277 sshd[5769]: Connection closed by 139.178.89.65 port 55306 Dec 16 12:28:59.660047 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:59.668267 systemd[1]: sshd@23-172.31.28.27:22-139.178.89.65:55306.service: Deactivated successfully. Dec 16 12:28:59.673980 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:28:59.678000 systemd-logind[1972]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:28:59.683728 systemd-logind[1972]: Removed session 24. Dec 16 12:29:00.175092 kubelet[3537]: E1216 12:29:00.175010 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" 
podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:29:04.170276 kubelet[3537]: E1216 12:29:04.170171 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:29:04.705225 systemd[1]: Started sshd@24-172.31.28.27:22-139.178.89.65:39360.service - OpenSSH per-connection server daemon (139.178.89.65:39360). Dec 16 12:29:04.923794 sshd[5781]: Accepted publickey for core from 139.178.89.65 port 39360 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:04.926956 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:04.937763 systemd-logind[1972]: New session 25 of user core. Dec 16 12:29:04.946762 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:29:05.181528 kubelet[3537]: E1216 12:29:05.180218 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:29:05.181528 kubelet[3537]: E1216 12:29:05.180995 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:29:05.266726 sshd[5784]: Connection closed by 139.178.89.65 port 39360 Dec 16 12:29:05.269786 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:05.283820 systemd-logind[1972]: Session 25 logged out. Waiting for processes to exit. 
Dec 16 12:29:05.286041 systemd[1]: sshd@24-172.31.28.27:22-139.178.89.65:39360.service: Deactivated successfully. Dec 16 12:29:05.294313 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:29:05.300922 systemd-logind[1972]: Removed session 25. Dec 16 12:29:06.171151 kubelet[3537]: E1216 12:29:06.170542 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:29:08.171735 kubelet[3537]: E1216 12:29:08.171681 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:29:10.308142 systemd[1]: Started sshd@25-172.31.28.27:22-139.178.89.65:34024.service - OpenSSH per-connection server daemon (139.178.89.65:34024). 
Dec 16 12:29:10.537555 sshd[5822]: Accepted publickey for core from 139.178.89.65 port 34024 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:10.547980 sshd-session[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:10.567330 systemd-logind[1972]: New session 26 of user core. Dec 16 12:29:10.575836 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:29:10.894347 sshd[5825]: Connection closed by 139.178.89.65 port 34024 Dec 16 12:29:10.894765 sshd-session[5822]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:10.904201 systemd[1]: sshd@25-172.31.28.27:22-139.178.89.65:34024.service: Deactivated successfully. Dec 16 12:29:10.909832 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:29:10.913196 systemd-logind[1972]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:29:10.918700 systemd-logind[1972]: Removed session 26. Dec 16 12:29:14.172664 kubelet[3537]: E1216 12:29:14.172365 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" 
podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:29:15.936495 systemd[1]: Started sshd@26-172.31.28.27:22-139.178.89.65:34036.service - OpenSSH per-connection server daemon (139.178.89.65:34036). Dec 16 12:29:16.145789 sshd[5841]: Accepted publickey for core from 139.178.89.65 port 34036 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:16.149072 sshd-session[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:16.163620 systemd-logind[1972]: New session 27 of user core. Dec 16 12:29:16.170766 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 12:29:16.500501 sshd[5844]: Connection closed by 139.178.89.65 port 34036 Dec 16 12:29:16.501298 sshd-session[5841]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:16.508825 systemd[1]: sshd@26-172.31.28.27:22-139.178.89.65:34036.service: Deactivated successfully. Dec 16 12:29:16.516901 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:29:16.521875 systemd-logind[1972]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:29:16.528773 systemd-logind[1972]: Removed session 27. 
Dec 16 12:29:17.172564 kubelet[3537]: E1216 12:29:17.171154 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:29:18.175589 kubelet[3537]: E1216 12:29:18.174775 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:29:19.184133 kubelet[3537]: E1216 12:29:19.183348 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:29:20.172143 kubelet[3537]: E1216 12:29:20.172075 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:29:21.540427 systemd[1]: Started sshd@27-172.31.28.27:22-139.178.89.65:40784.service - OpenSSH per-connection server daemon (139.178.89.65:40784). Dec 16 12:29:21.757627 sshd[5859]: Accepted publickey for core from 139.178.89.65 port 40784 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:21.762641 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:21.778973 systemd-logind[1972]: New session 28 of user core. Dec 16 12:29:21.785793 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 16 12:29:22.145337 sshd[5862]: Connection closed by 139.178.89.65 port 40784 Dec 16 12:29:22.147650 sshd-session[5859]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:22.162163 systemd[1]: session-28.scope: Deactivated successfully. 
Dec 16 12:29:22.166707 systemd[1]: sshd@27-172.31.28.27:22-139.178.89.65:40784.service: Deactivated successfully. Dec 16 12:29:22.180736 kubelet[3537]: E1216 12:29:22.180603 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:29:22.181982 systemd-logind[1972]: Session 28 logged out. Waiting for processes to exit. Dec 16 12:29:22.189370 systemd-logind[1972]: Removed session 28. Dec 16 12:29:26.171895 kubelet[3537]: E1216 12:29:26.171584 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:29:28.171215 
kubelet[3537]: E1216 12:29:28.171152 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:29:31.172156 kubelet[3537]: E1216 12:29:31.172029 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:29:33.171390 kubelet[3537]: E1216 12:29:33.171018 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:29:33.171390 kubelet[3537]: E1216 12:29:33.171250 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:29:34.171304 containerd[2000]: time="2025-12-16T12:29:34.171232141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:34.452640 containerd[2000]: time="2025-12-16T12:29:34.452437874Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:34.454704 containerd[2000]: time="2025-12-16T12:29:34.454606622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:34.454987 containerd[2000]: time="2025-12-16T12:29:34.454616546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:34.455122 kubelet[3537]: E1216 12:29:34.454992 3537 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:34.455122 kubelet[3537]: E1216 12:29:34.455058 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:34.455894 kubelet[3537]: E1216 12:29:34.455280 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvxjw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-9gjd2_calico-apiserver(a301cdcf-9f24-4b62-9c32-ae5e7ca3de08): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:34.456630 kubelet[3537]: E1216 12:29:34.456559 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:29:36.154288 systemd[1]: cri-containerd-1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192.scope: Deactivated successfully. 
Dec 16 12:29:36.156167 systemd[1]: cri-containerd-1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192.scope: Consumed 6.313s CPU time, 61.9M memory peak, 192K read from disk. Dec 16 12:29:36.163412 containerd[2000]: time="2025-12-16T12:29:36.163121991Z" level=info msg="received container exit event container_id:\"1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192\" id:\"1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192\" pid:3205 exit_status:1 exited_at:{seconds:1765888176 nanos:162363807}" Dec 16 12:29:36.221022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192-rootfs.mount: Deactivated successfully. Dec 16 12:29:36.408060 systemd[1]: cri-containerd-e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496.scope: Deactivated successfully. Dec 16 12:29:36.409009 systemd[1]: cri-containerd-e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496.scope: Consumed 29.273s CPU time, 120.1M memory peak. Dec 16 12:29:36.415721 containerd[2000]: time="2025-12-16T12:29:36.415647652Z" level=info msg="received container exit event container_id:\"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\" id:\"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\" pid:3865 exit_status:1 exited_at:{seconds:1765888176 nanos:415106788}" Dec 16 12:29:36.467310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496-rootfs.mount: Deactivated successfully. 
Dec 16 12:29:37.081927 kubelet[3537]: I1216 12:29:37.080564 3537 scope.go:117] "RemoveContainer" containerID="1e07397937860a86a37a4abd012e42d7ce2b2d16c72208d6123d1c85fe3e1192" Dec 16 12:29:37.087041 containerd[2000]: time="2025-12-16T12:29:37.086962359Z" level=info msg="CreateContainer within sandbox \"dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 12:29:37.087988 kubelet[3537]: I1216 12:29:37.087933 3537 scope.go:117] "RemoveContainer" containerID="e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496" Dec 16 12:29:37.092255 containerd[2000]: time="2025-12-16T12:29:37.092178231Z" level=info msg="CreateContainer within sandbox \"ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:29:37.106496 containerd[2000]: time="2025-12-16T12:29:37.105440739Z" level=info msg="Container bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:37.128930 containerd[2000]: time="2025-12-16T12:29:37.128767239Z" level=info msg="CreateContainer within sandbox \"dc39f0787a4e48b9d5e382f1642b726ee8e2d2eb9f64fc43c2173d8610e2b78f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4\"" Dec 16 12:29:37.131485 containerd[2000]: time="2025-12-16T12:29:37.130611387Z" level=info msg="Container 445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:37.134370 containerd[2000]: time="2025-12-16T12:29:37.134289243Z" level=info msg="StartContainer for \"bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4\"" Dec 16 12:29:37.139080 containerd[2000]: time="2025-12-16T12:29:37.139014027Z" level=info msg="connecting to shim 
bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4" address="unix:///run/containerd/s/cef3dea94a30dd72bc78f6139cb67c90474ac90e013773463ace2741f6c92492" protocol=ttrpc version=3 Dec 16 12:29:37.150907 containerd[2000]: time="2025-12-16T12:29:37.150826576Z" level=info msg="CreateContainer within sandbox \"ff750849f53014f710ebb65ed3d9f64e4a9eabde1b69cf68629bd7055eca481d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422\"" Dec 16 12:29:37.152056 containerd[2000]: time="2025-12-16T12:29:37.151995064Z" level=info msg="StartContainer for \"445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422\"" Dec 16 12:29:37.154606 containerd[2000]: time="2025-12-16T12:29:37.154528852Z" level=info msg="connecting to shim 445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422" address="unix:///run/containerd/s/2662cae030359e36bacc88121a9a81a0efdb92b14003a4ab54c9e450e4cb53cd" protocol=ttrpc version=3 Dec 16 12:29:37.187769 systemd[1]: Started cri-containerd-bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4.scope - libcontainer container bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4. Dec 16 12:29:37.211822 systemd[1]: Started cri-containerd-445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422.scope - libcontainer container 445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422. 
Dec 16 12:29:37.337862 containerd[2000]: time="2025-12-16T12:29:37.337617856Z" level=info msg="StartContainer for \"445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422\" returns successfully" Dec 16 12:29:37.341442 containerd[2000]: time="2025-12-16T12:29:37.340920976Z" level=info msg="StartContainer for \"bc066d16d881876a211ad0c3d1f0ad1a4a4fc50efd82976bdd298678bd0a2bf4\" returns successfully" Dec 16 12:29:39.180054 containerd[2000]: time="2025-12-16T12:29:39.179685666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:29:39.468341 containerd[2000]: time="2025-12-16T12:29:39.468022015Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:39.470534 containerd[2000]: time="2025-12-16T12:29:39.470405299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:29:39.470899 containerd[2000]: time="2025-12-16T12:29:39.470741023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:29:39.471569 kubelet[3537]: E1216 12:29:39.471349 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:39.472812 kubelet[3537]: E1216 12:29:39.471532 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:39.472812 kubelet[3537]: E1216 12:29:39.472624 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:958fd80b879b4ffea30414c254bfad02,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:39.477525 containerd[2000]: time="2025-12-16T12:29:39.477124279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:29:39.795877 containerd[2000]: time="2025-12-16T12:29:39.795318645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:39.799036 containerd[2000]: time="2025-12-16T12:29:39.798880521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:29:39.799332 containerd[2000]: time="2025-12-16T12:29:39.798945129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:39.799821 kubelet[3537]: E1216 12:29:39.799762 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:39.800103 kubelet[3537]: E1216 12:29:39.800065 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:39.800429 kubelet[3537]: E1216 12:29:39.800357 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25hgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c4db486f6-22tfh_calico-system(75aaea04-37f8-41d2-8060-6e5472e00f96): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:39.802013 kubelet[3537]: E1216 12:29:39.801901 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:29:40.997898 systemd[1]: cri-containerd-9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701.scope: Deactivated successfully. Dec 16 12:29:40.999204 systemd[1]: cri-containerd-9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701.scope: Consumed 5.251s CPU time, 21.3M memory peak, 348K read from disk. 
Dec 16 12:29:41.002803 containerd[2000]: time="2025-12-16T12:29:41.001675615Z" level=info msg="received container exit event container_id:\"9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701\" id:\"9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701\" pid:3185 exit_status:1 exited_at:{seconds:1765888181 nanos:1151863}" Dec 16 12:29:41.045058 kubelet[3537]: E1216 12:29:41.044301 3537 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-27?timeout=10s\": context deadline exceeded" Dec 16 12:29:41.060399 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701-rootfs.mount: Deactivated successfully. Dec 16 12:29:41.119237 kubelet[3537]: I1216 12:29:41.119193 3537 scope.go:117] "RemoveContainer" containerID="9ad5ebb6465e4108c57429c3cf3a30ce8e1d8bd4760a542c67869986139f0701" Dec 16 12:29:41.123255 containerd[2000]: time="2025-12-16T12:29:41.123195691Z" level=info msg="CreateContainer within sandbox \"cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 12:29:41.141168 containerd[2000]: time="2025-12-16T12:29:41.140789791Z" level=info msg="Container 1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:41.159993 containerd[2000]: time="2025-12-16T12:29:41.159907411Z" level=info msg="CreateContainer within sandbox \"cba10619cf19007fbe0659e1c15c4f1166cd5fbf190fadb0e52aa3c4ebbfc129\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc\"" Dec 16 12:29:41.160952 containerd[2000]: time="2025-12-16T12:29:41.160902583Z" level=info msg="StartContainer for \"1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc\"" Dec 16 
12:29:41.163046 containerd[2000]: time="2025-12-16T12:29:41.162976363Z" level=info msg="connecting to shim 1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc" address="unix:///run/containerd/s/91879df58b40127b7dd775a68a88da22566b1ce12605d5b810df6ef7b45c5af6" protocol=ttrpc version=3 Dec 16 12:29:41.217818 systemd[1]: Started cri-containerd-1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc.scope - libcontainer container 1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc. Dec 16 12:29:41.306799 containerd[2000]: time="2025-12-16T12:29:41.306150500Z" level=info msg="StartContainer for \"1aec528736cbd2a0cd10cfdfbddd14654f27d8cdb4bb2788f734ccfe3bec6bcc\" returns successfully" Dec 16 12:29:42.170420 containerd[2000]: time="2025-12-16T12:29:42.170362520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:42.441283 containerd[2000]: time="2025-12-16T12:29:42.441112606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:42.443543 containerd[2000]: time="2025-12-16T12:29:42.443418550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:42.443810 containerd[2000]: time="2025-12-16T12:29:42.443602498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:42.444141 kubelet[3537]: E1216 12:29:42.444007 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:42.444141 kubelet[3537]: E1216 12:29:42.444096 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:42.444831 kubelet[3537]: E1216 12:29:42.444390 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbpd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec
:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-64f7b777d7-gkwp7_calico-system(c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:42.445687 kubelet[3537]: E1216 12:29:42.445622 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:29:43.172713 containerd[2000]: time="2025-12-16T12:29:43.172599525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:29:43.424855 containerd[2000]: time="2025-12-16T12:29:43.424685063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:43.427733 containerd[2000]: time="2025-12-16T12:29:43.427576763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:29:43.427733 containerd[2000]: time="2025-12-16T12:29:43.427662383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:29:43.428851 kubelet[3537]: E1216 12:29:43.428253 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:43.428851 kubelet[3537]: E1216 12:29:43.428323 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:43.429246 kubelet[3537]: E1216 12:29:43.429169 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:43.432750 containerd[2000]: time="2025-12-16T12:29:43.432676031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:29:43.721566 containerd[2000]: time="2025-12-16T12:29:43.721373316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:43.724040 containerd[2000]: time="2025-12-16T12:29:43.723900156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:29:43.724040 containerd[2000]: time="2025-12-16T12:29:43.723969240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:29:43.724689 kubelet[3537]: E1216 12:29:43.724205 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:43.724689 kubelet[3537]: E1216 12:29:43.724266 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:43.725364 kubelet[3537]: E1216 12:29:43.724436 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wr8d2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-z7gkl_calico-system(de3f24db-d343-45e7-a0cf-74925b070014): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:43.726639 kubelet[3537]: E1216 12:29:43.726562 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014" Dec 16 12:29:44.170512 containerd[2000]: time="2025-12-16T12:29:44.170333158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:44.432906 containerd[2000]: time="2025-12-16T12:29:44.432752556Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:44.435009 containerd[2000]: time="2025-12-16T12:29:44.434910576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:44.435199 containerd[2000]: time="2025-12-16T12:29:44.434924916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:44.435269 kubelet[3537]: E1216 12:29:44.435223 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:44.435357 kubelet[3537]: E1216 12:29:44.435285 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:44.435831 kubelet[3537]: E1216 12:29:44.435755 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsrd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-757bdf8b44-h2nb9_calico-apiserver(76e6c14e-6dea-41f8-8e8a-730830194387): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:44.436049 containerd[2000]: time="2025-12-16T12:29:44.436006956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:29:44.437634 kubelet[3537]: E1216 12:29:44.437575 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-h2nb9" podUID="76e6c14e-6dea-41f8-8e8a-730830194387" Dec 16 12:29:44.756401 containerd[2000]: time="2025-12-16T12:29:44.756242209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:44.758944 containerd[2000]: time="2025-12-16T12:29:44.758870701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:29:44.759246 containerd[2000]: time="2025-12-16T12:29:44.758992945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:44.759318 kubelet[3537]: E1216 12:29:44.759235 3537 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:44.759318 kubelet[3537]: E1216 12:29:44.759295 3537 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:44.759882 kubelet[3537]: E1216 12:29:44.759522 3537 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jql9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nv4z4_calico-system(1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:44.760818 kubelet[3537]: E1216 12:29:44.760768 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nv4z4" podUID="1a7a3ca4-b553-4916-9cf0-5a9aaa1485e7" Dec 16 12:29:46.170094 kubelet[3537]: E1216 12:29:46.170008 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-757bdf8b44-9gjd2" podUID="a301cdcf-9f24-4b62-9c32-ae5e7ca3de08" Dec 16 12:29:48.869214 systemd[1]: cri-containerd-445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422.scope: Deactivated successfully. Dec 16 12:29:48.872270 containerd[2000]: time="2025-12-16T12:29:48.872222538Z" level=info msg="received container exit event container_id:\"445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422\" id:\"445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422\" pid:5966 exit_status:1 exited_at:{seconds:1765888188 nanos:870055098}" Dec 16 12:29:48.913948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422-rootfs.mount: Deactivated successfully. 
Dec 16 12:29:49.155913 kubelet[3537]: I1216 12:29:49.155840 3537 scope.go:117] "RemoveContainer" containerID="e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496" Dec 16 12:29:49.158147 kubelet[3537]: I1216 12:29:49.157743 3537 scope.go:117] "RemoveContainer" containerID="445fac3bd33b32963817f12b3f80f7c545069f693c91ff358cfc098847e8b422" Dec 16 12:29:49.158147 kubelet[3537]: E1216 12:29:49.158046 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-b4jvt_tigera-operator(c20ba251-1a94-4344-a4e8-294dd4c4b4ea)\"" pod="tigera-operator/tigera-operator-7dcd859c48-b4jvt" podUID="c20ba251-1a94-4344-a4e8-294dd4c4b4ea" Dec 16 12:29:49.161734 containerd[2000]: time="2025-12-16T12:29:49.161652183Z" level=info msg="RemoveContainer for \"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\"" Dec 16 12:29:49.173032 containerd[2000]: time="2025-12-16T12:29:49.172898259Z" level=info msg="RemoveContainer for \"e05e35523943ddc4a58efe239c8f51530ca2793be2c141f3fba77ba0477c7496\" returns successfully" Dec 16 12:29:51.045631 kubelet[3537]: E1216 12:29:51.045255 3537 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-27?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:29:52.170415 kubelet[3537]: E1216 12:29:52.170347 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c4db486f6-22tfh" podUID="75aaea04-37f8-41d2-8060-6e5472e00f96" Dec 16 12:29:55.170558 kubelet[3537]: E1216 12:29:55.170362 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64f7b777d7-gkwp7" podUID="c0fcc3a9-bcc7-40dd-851f-34cdc70e8f49" Dec 16 12:29:56.171318 kubelet[3537]: E1216 12:29:56.171230 3537 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-z7gkl" podUID="de3f24db-d343-45e7-a0cf-74925b070014"