Dec 16 12:26:49.211704 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 16 12:26:49.211754 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:26:49.211780 kernel: KASLR disabled due to lack of seed
Dec 16 12:26:49.211797 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:26:49.211814 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Dec 16 12:26:49.211830 kernel: secureboot: Secure boot disabled
Dec 16 12:26:49.211849 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:26:49.211865 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 16 12:26:49.211881 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 16 12:26:49.211898 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 16 12:26:49.211915 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 16 12:26:49.211937 kernel: ACPI: FACS 0x0000000078630000 000040
Dec 16 12:26:49.211953 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 16 12:26:49.211970 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 16 12:26:49.211989 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 16 12:26:49.212006 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 16 12:26:49.212030 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 16 12:26:49.212048 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 16 12:26:49.212110 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 16 12:26:49.212153 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 16 12:26:49.212173 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 16 12:26:49.212190 kernel: printk: legacy bootconsole [uart0] enabled
Dec 16 12:26:49.212208 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:26:49.212226 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 16 12:26:49.212243 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Dec 16 12:26:49.212260 kernel: Zone ranges:
Dec 16 12:26:49.212277 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 16 12:26:49.212302 kernel: DMA32 empty
Dec 16 12:26:49.212321 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 16 12:26:49.212339 kernel: Device empty
Dec 16 12:26:49.212355 kernel: Movable zone start for each node
Dec 16 12:26:49.212372 kernel: Early memory node ranges
Dec 16 12:26:49.212390 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 16 12:26:49.212406 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 16 12:26:49.212422 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 16 12:26:49.212439 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 16 12:26:49.212456 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 16 12:26:49.212472 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 16 12:26:49.212488 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 16 12:26:49.212511 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 16 12:26:49.212535 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 16 12:26:49.212552 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 16 12:26:49.212570 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Dec 16 12:26:49.212587 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:26:49.212609 kernel: psci: PSCIv1.0 detected in firmware.
Dec 16 12:26:49.212626 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:26:49.212643 kernel: psci: Trusted OS migration not required
Dec 16 12:26:49.212660 kernel: psci: SMC Calling Convention v1.1
Dec 16 12:26:49.212784 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Dec 16 12:26:49.212809 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:26:49.212826 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:26:49.212844 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:26:49.212861 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:26:49.212878 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:26:49.212895 kernel: CPU features: detected: Spectre-v2
Dec 16 12:26:49.212917 kernel: CPU features: detected: Spectre-v3a
Dec 16 12:26:49.212936 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:26:49.212953 kernel: CPU features: detected: ARM erratum 1742098
Dec 16 12:26:49.212970 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 16 12:26:49.212986 kernel: alternatives: applying boot alternatives
Dec 16 12:26:49.213005 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:26:49.213023 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:26:49.213040 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:26:49.213083 kernel: Fallback order for Node 0: 0
Dec 16 12:26:49.213108 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Dec 16 12:26:49.213125 kernel: Policy zone: Normal
Dec 16 12:26:49.213149 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:26:49.215851 kernel: software IO TLB: area num 2.
Dec 16 12:26:49.216400 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Dec 16 12:26:49.216432 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:26:49.216451 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:26:49.216470 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:26:49.216490 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:26:49.216508 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:26:49.216525 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:26:49.216543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:26:49.216560 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:26:49.216590 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:26:49.216609 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:26:49.216627 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:26:49.216645 kernel: GICv3: 96 SPIs implemented
Dec 16 12:26:49.216662 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:26:49.216679 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:26:49.216696 kernel: GICv3: GICv3 features: 16 PPIs
Dec 16 12:26:49.216713 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 16 12:26:49.216731 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 16 12:26:49.216748 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 16 12:26:49.216766 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Dec 16 12:26:49.216786 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Dec 16 12:26:49.216809 kernel: GICv3: using LPI property table @0x0000000400110000
Dec 16 12:26:49.216827 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 16 12:26:49.216844 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Dec 16 12:26:49.216862 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:26:49.216880 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 16 12:26:49.216898 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 16 12:26:49.216916 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 16 12:26:49.216933 kernel: Console: colour dummy device 80x25
Dec 16 12:26:49.216952 kernel: printk: legacy console [tty1] enabled
Dec 16 12:26:49.216969 kernel: ACPI: Core revision 20240827
Dec 16 12:26:49.216987 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 16 12:26:49.217011 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:26:49.217029 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:26:49.217046 kernel: landlock: Up and running.
Dec 16 12:26:49.217110 kernel: SELinux: Initializing.
Dec 16 12:26:49.217132 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:26:49.217150 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:26:49.217168 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:26:49.217185 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:26:49.217210 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:26:49.217228 kernel: Remapping and enabling EFI services.
Dec 16 12:26:49.217246 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:26:49.217263 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:26:49.217280 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 16 12:26:49.217297 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Dec 16 12:26:49.217315 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 16 12:26:49.217333 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:26:49.217350 kernel: SMP: Total of 2 processors activated.
Dec 16 12:26:49.217372 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:26:49.217400 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:26:49.217419 kernel: CPU features: detected: 32-bit EL1 Support
Dec 16 12:26:49.217441 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:26:49.217459 kernel: alternatives: applying system-wide alternatives
Dec 16 12:26:49.217479 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Dec 16 12:26:49.217497 kernel: devtmpfs: initialized
Dec 16 12:26:49.217515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:26:49.217537 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:26:49.217555 kernel: 16880 pages in range for non-PLT usage
Dec 16 12:26:49.217573 kernel: 508400 pages in range for PLT usage
Dec 16 12:26:49.217591 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:26:49.217609 kernel: SMBIOS 3.0.0 present.
Dec 16 12:26:49.217627 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 16 12:26:49.217646 kernel: DMI: Memory slots populated: 0/0
Dec 16 12:26:49.217664 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:26:49.217683 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:26:49.217705 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:26:49.217724 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:26:49.217742 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:26:49.217760 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Dec 16 12:26:49.217778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:26:49.217796 kernel: cpuidle: using governor menu
Dec 16 12:26:49.217814 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:26:49.217832 kernel: ASID allocator initialised with 65536 entries
Dec 16 12:26:49.217849 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:26:49.217871 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:26:49.217889 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:26:49.217907 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:26:49.217925 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:26:49.217944 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:26:49.217962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:26:49.217980 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:26:49.217998 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:26:49.218016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:26:49.218038 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:26:49.218476 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:26:49.218525 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:26:49.218547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:26:49.218565 kernel: ACPI: Interpreter enabled
Dec 16 12:26:49.218584 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:26:49.218603 kernel: ACPI: MCFG table detected, 1 entries
Dec 16 12:26:49.218622 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:26:49.218641 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:26:49.218671 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Dec 16 12:26:49.219024 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 12:26:49.219283 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 16 12:26:49.219503 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 16 12:26:49.219718 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Dec 16 12:26:49.219936 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Dec 16 12:26:49.219966 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 16 12:26:49.219999 kernel: acpiphp: Slot [1] registered
Dec 16 12:26:49.220019 kernel: acpiphp: Slot [2] registered
Dec 16 12:26:49.220037 kernel: acpiphp: Slot [3] registered
Dec 16 12:26:49.220056 kernel: acpiphp: Slot [4] registered
Dec 16 12:26:49.221263 kernel: acpiphp: Slot [5] registered
Dec 16 12:26:49.221283 kernel: acpiphp: Slot [6] registered
Dec 16 12:26:49.221301 kernel: acpiphp: Slot [7] registered
Dec 16 12:26:49.221320 kernel: acpiphp: Slot [8] registered
Dec 16 12:26:49.221338 kernel: acpiphp: Slot [9] registered
Dec 16 12:26:49.221356 kernel: acpiphp: Slot [10] registered
Dec 16 12:26:49.221387 kernel: acpiphp: Slot [11] registered
Dec 16 12:26:49.221405 kernel: acpiphp: Slot [12] registered
Dec 16 12:26:49.221424 kernel: acpiphp: Slot [13] registered
Dec 16 12:26:49.221443 kernel: acpiphp: Slot [14] registered
Dec 16 12:26:49.221461 kernel: acpiphp: Slot [15] registered
Dec 16 12:26:49.221479 kernel: acpiphp: Slot [16] registered
Dec 16 12:26:49.221497 kernel: acpiphp: Slot [17] registered
Dec 16 12:26:49.221515 kernel: acpiphp: Slot [18] registered
Dec 16 12:26:49.221533 kernel: acpiphp: Slot [19] registered
Dec 16 12:26:49.221556 kernel: acpiphp: Slot [20] registered
Dec 16 12:26:49.221575 kernel: acpiphp: Slot [21] registered
Dec 16 12:26:49.221593 kernel: acpiphp: Slot [22] registered
Dec 16 12:26:49.221611 kernel: acpiphp: Slot [23] registered
Dec 16 12:26:49.221629 kernel: acpiphp: Slot [24] registered
Dec 16 12:26:49.221647 kernel: acpiphp: Slot [25] registered
Dec 16 12:26:49.221665 kernel: acpiphp: Slot [26] registered
Dec 16 12:26:49.221683 kernel: acpiphp: Slot [27] registered
Dec 16 12:26:49.221701 kernel: acpiphp: Slot [28] registered
Dec 16 12:26:49.221719 kernel: acpiphp: Slot [29] registered
Dec 16 12:26:49.221743 kernel: acpiphp: Slot [30] registered
Dec 16 12:26:49.221760 kernel: acpiphp: Slot [31] registered
Dec 16 12:26:49.221778 kernel: PCI host bridge to bus 0000:00
Dec 16 12:26:49.222105 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 16 12:26:49.222327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 16 12:26:49.222525 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 16 12:26:49.222718 kernel: pci_bus 0000:00: root bus resource [bus 00]
Dec 16 12:26:49.222995 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Dec 16 12:26:49.223305 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Dec 16 12:26:49.223534 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Dec 16 12:26:49.223775 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Dec 16 12:26:49.223975 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Dec 16 12:26:49.224284 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 16 12:26:49.224548 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Dec 16 12:26:49.224761 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Dec 16 12:26:49.224965 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Dec 16 12:26:49.225239 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Dec 16 12:26:49.225456 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 16 12:26:49.225653 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 16 12:26:49.225850 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 16 12:26:49.226187 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 16 12:26:49.226229 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 16 12:26:49.226250 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 16 12:26:49.226269 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 16 12:26:49.226289 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 16 12:26:49.226309 kernel: iommu: Default domain type: Translated
Dec 16 12:26:49.226328 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:26:49.226347 kernel: efivars: Registered efivars operations
Dec 16 12:26:49.226366 kernel: vgaarb: loaded
Dec 16 12:26:49.226395 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:26:49.226414 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:26:49.226433 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:26:49.226451 kernel: pnp: PnP ACPI init
Dec 16 12:26:49.226713 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 16 12:26:49.226752 kernel: pnp: PnP ACPI: found 1 devices
Dec 16 12:26:49.226772 kernel: NET: Registered PF_INET protocol family
Dec 16 12:26:49.226791 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:26:49.226819 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:26:49.226837 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:26:49.226855 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:26:49.226874 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:26:49.226892 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:26:49.226911 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:26:49.226930 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:26:49.226949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:26:49.226968 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:26:49.226991 kernel: kvm [1]: HYP mode not available
Dec 16 12:26:49.227010 kernel: Initialise system trusted keyrings
Dec 16 12:26:49.227028 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:26:49.227047 kernel: Key type asymmetric registered
Dec 16 12:26:49.227099 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:26:49.227122 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:26:49.227142 kernel: io scheduler mq-deadline registered
Dec 16 12:26:49.227160 kernel: io scheduler kyber registered
Dec 16 12:26:49.227179 kernel: io scheduler bfq registered
Dec 16 12:26:49.229671 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 16 12:26:49.229721 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 16 12:26:49.229740 kernel: ACPI: button: Power Button [PWRB]
Dec 16 12:26:49.229758 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 16 12:26:49.229778 kernel: ACPI: button: Sleep Button [SLPB]
Dec 16 12:26:49.229796 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:26:49.229816 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 16 12:26:49.230090 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 16 12:26:49.230140 kernel: printk: legacy console [ttyS0] disabled
Dec 16 12:26:49.230161 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 16 12:26:49.230181 kernel: printk: legacy console [ttyS0] enabled
Dec 16 12:26:49.230200 kernel: printk: legacy bootconsole [uart0] disabled
Dec 16 12:26:49.230220 kernel: thunder_xcv, ver 1.0
Dec 16 12:26:49.230240 kernel: thunder_bgx, ver 1.0
Dec 16 12:26:49.230260 kernel: nicpf, ver 1.0
Dec 16 12:26:49.230279 kernel: nicvf, ver 1.0
Dec 16 12:26:49.230548 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:26:49.230777 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:26:48 UTC (1765888008)
Dec 16 12:26:49.230807 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:26:49.230827 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Dec 16 12:26:49.230846 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:26:49.230864 kernel: watchdog: NMI not fully supported
Dec 16 12:26:49.230883 kernel: Segment Routing with IPv6
Dec 16 12:26:49.230901 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:26:49.230919 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:26:49.230938 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:26:49.230965 kernel: Key type dns_resolver registered
Dec 16 12:26:49.230984 kernel: registered taskstats version 1
Dec 16 12:26:49.231002 kernel: Loading compiled-in X.509 certificates
Dec 16 12:26:49.231021 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:26:49.231040 kernel: Demotion targets for Node 0: null
Dec 16 12:26:49.231090 kernel: Key type .fscrypt registered
Dec 16 12:26:49.231118 kernel: Key type fscrypt-provisioning registered
Dec 16 12:26:49.231137 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:26:49.231156 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:26:49.231183 kernel: ima: No architecture policies found
Dec 16 12:26:49.231201 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:26:49.231220 kernel: clk: Disabling unused clocks
Dec 16 12:26:49.231238 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:26:49.231256 kernel: Warning: unable to open an initial console.
Dec 16 12:26:49.231274 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:26:49.231292 kernel: Run /init as init process
Dec 16 12:26:49.231311 kernel: with arguments:
Dec 16 12:26:49.231328 kernel: /init
Dec 16 12:26:49.231351 kernel: with environment:
Dec 16 12:26:49.231369 kernel: HOME=/
Dec 16 12:26:49.231387 kernel: TERM=linux
Dec 16 12:26:49.231407 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:26:49.231433 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:26:49.231453 systemd[1]: Detected virtualization amazon.
Dec 16 12:26:49.231473 systemd[1]: Detected architecture arm64.
Dec 16 12:26:49.231497 systemd[1]: Running in initrd.
Dec 16 12:26:49.231516 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:26:49.231537 systemd[1]: Hostname set to .
Dec 16 12:26:49.231556 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:26:49.231575 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:26:49.231595 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:26:49.231615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:49.231636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:26:49.231662 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:26:49.231683 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:26:49.231704 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:26:49.231726 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:26:49.231746 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:26:49.231766 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:26:49.231786 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:26:49.231811 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:26:49.231831 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:26:49.231851 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:26:49.231871 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:26:49.231891 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:26:49.231911 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:26:49.231931 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:26:49.231951 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:26:49.231971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:49.231997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:49.232017 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:49.232037 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:26:49.232082 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:26:49.232111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:26:49.232202 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:26:49.232224 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:26:49.232244 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:26:49.232277 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:26:49.232298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:26:49.232319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:26:49.232339 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:26:49.232361 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:26:49.232387 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:26:49.232408 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:26:49.232429 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:26:49.232449 kernel: Bridge firewalling registered
Dec 16 12:26:49.232469 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:26:49.232489 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:49.232563 systemd-journald[259]: Collecting audit messages is disabled.
Dec 16 12:26:49.232615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:26:49.232637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:26:49.232659 systemd-journald[259]: Journal started
Dec 16 12:26:49.232701 systemd-journald[259]: Runtime Journal (/run/log/journal/ec267b5d1163ec2347373bc0163d6da2) is 8M, max 75.3M, 67.3M free.
Dec 16 12:26:49.144080 systemd-modules-load[260]: Inserted module 'overlay'
Dec 16 12:26:49.193146 systemd-modules-load[260]: Inserted module 'br_netfilter'
Dec 16 12:26:49.243215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:49.247120 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:26:49.268375 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:26:49.291376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:26:49.308211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:49.320832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:49.339970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:26:49.352039 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:26:49.362050 systemd-tmpfiles[287]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:26:49.379173 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:49.390016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:49.420470 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:26:49.503618 systemd-resolved[303]: Positive Trust Anchors:
Dec 16 12:26:49.503648 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:26:49.503709 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:26:49.615106 kernel: SCSI subsystem initialized
Dec 16 12:26:49.623101 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:26:49.636106 kernel: iscsi: registered transport (tcp)
Dec 16 12:26:49.658323 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:26:49.658397 kernel: QLogic iSCSI HBA Driver
Dec 16 12:26:49.695254 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:26:49.731411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:26:49.739135 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:26:49.762765 kernel: random: crng init done Dec 16 12:26:49.762548 systemd-resolved[303]: Defaulting to hostname 'linux'. Dec 16 12:26:49.767890 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 12:26:49.774166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:26:49.858204 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 12:26:49.865986 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 12:26:49.972165 kernel: raid6: neonx8 gen() 6377 MB/s Dec 16 12:26:49.990118 kernel: raid6: neonx4 gen() 6377 MB/s Dec 16 12:26:50.007127 kernel: raid6: neonx2 gen() 5280 MB/s Dec 16 12:26:50.025130 kernel: raid6: neonx1 gen() 3873 MB/s Dec 16 12:26:50.042125 kernel: raid6: int64x8 gen() 3591 MB/s Dec 16 12:26:50.059126 kernel: raid6: int64x4 gen() 3637 MB/s Dec 16 12:26:50.077131 kernel: raid6: int64x2 gen() 3522 MB/s Dec 16 12:26:50.095259 kernel: raid6: int64x1 gen() 2739 MB/s Dec 16 12:26:50.095331 kernel: raid6: using algorithm neonx8 gen() 6377 MB/s Dec 16 12:26:50.114299 kernel: raid6: .... xor() 4647 MB/s, rmw enabled Dec 16 12:26:50.114380 kernel: raid6: using neon recovery algorithm Dec 16 12:26:50.124089 kernel: xor: measuring software checksum speed Dec 16 12:26:50.124178 kernel: 8regs : 12952 MB/sec Dec 16 12:26:50.125350 kernel: 32regs : 12010 MB/sec Dec 16 12:26:50.126699 kernel: arm64_neon : 9182 MB/sec Dec 16 12:26:50.126749 kernel: xor: using function: 8regs (12952 MB/sec) Dec 16 12:26:50.221104 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 12:26:50.231839 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 16 12:26:50.239193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:26:50.294087 systemd-udevd[510]: Using default interface naming scheme 'v255'. Dec 16 12:26:50.306213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:26:50.316911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:26:50.357116 dracut-pre-trigger[514]: rd.md=0: removing MD RAID activation Dec 16 12:26:50.400872 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:26:50.403132 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:26:50.549843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:26:50.559699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:26:50.699243 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 16 12:26:50.699310 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 16 12:26:50.706422 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 16 12:26:50.706721 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 16 12:26:50.730116 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:f4:ce:b9:71:f5 Dec 16 12:26:50.732724 (udev-worker)[580]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:26:50.751895 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:26:50.754407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:26:50.760427 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:26:50.770260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:26:50.777129 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Dec 16 12:26:50.786482 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 16 12:26:50.786554 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 16 12:26:50.797114 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 16 12:26:50.809294 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 12:26:50.809384 kernel: GPT:9289727 != 33554431 Dec 16 12:26:50.809413 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 12:26:50.812391 kernel: GPT:9289727 != 33554431 Dec 16 12:26:50.812477 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 12:26:50.812506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 12:26:50.833001 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:26:50.863252 kernel: nvme nvme0: using unchecked data buffer Dec 16 12:26:50.990909 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 16 12:26:51.043522 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:26:51.087631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 16 12:26:51.115318 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 16 12:26:51.140380 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 16 12:26:51.143812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 16 12:26:51.151195 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:26:51.155739 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:26:51.164555 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:26:51.168716 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 16 12:26:51.179184 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:26:51.209112 disk-uuid[689]: Primary Header is updated. Dec 16 12:26:51.209112 disk-uuid[689]: Secondary Entries is updated. Dec 16 12:26:51.209112 disk-uuid[689]: Secondary Header is updated. Dec 16 12:26:51.226275 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 12:26:51.227511 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:26:52.251147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 16 12:26:52.252368 disk-uuid[691]: The operation has completed successfully. Dec 16 12:26:52.465254 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:26:52.465451 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:26:52.549538 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:26:52.588399 sh[958]: Success Dec 16 12:26:52.619222 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:26:52.619353 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:26:52.619399 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:26:52.634269 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:26:52.730217 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:26:52.741993 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:26:52.772907 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 12:26:52.793141 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (981) Dec 16 12:26:52.798000 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:26:52.798107 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:26:52.917990 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 12:26:52.918089 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:26:52.919433 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:26:52.943109 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:26:52.946433 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:26:52.947445 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:26:52.959379 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:26:52.972904 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:26:53.022124 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014) Dec 16 12:26:53.026930 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:26:53.027020 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:26:53.037780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:26:53.037873 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:26:53.047163 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:26:53.049661 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 16 12:26:53.054435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:26:53.156954 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:26:53.165725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:26:53.244530 systemd-networkd[1150]: lo: Link UP Dec 16 12:26:53.244551 systemd-networkd[1150]: lo: Gained carrier Dec 16 12:26:53.249857 systemd-networkd[1150]: Enumeration completed Dec 16 12:26:53.250025 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:26:53.252653 systemd[1]: Reached target network.target - Network. Dec 16 12:26:53.256595 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:26:53.256603 systemd-networkd[1150]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:26:53.268026 systemd-networkd[1150]: eth0: Link UP Dec 16 12:26:53.268322 systemd-networkd[1150]: eth0: Gained carrier Dec 16 12:26:53.268345 systemd-networkd[1150]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:26:53.286155 systemd-networkd[1150]: eth0: DHCPv4 address 172.31.24.3/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 16 12:26:53.641899 ignition[1065]: Ignition 2.22.0 Dec 16 12:26:53.641928 ignition[1065]: Stage: fetch-offline Dec 16 12:26:53.644249 ignition[1065]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:53.644273 ignition[1065]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:53.646226 ignition[1065]: Ignition finished successfully Dec 16 12:26:53.655588 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:26:53.663138 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:26:53.714477 ignition[1160]: Ignition 2.22.0 Dec 16 12:26:53.714506 ignition[1160]: Stage: fetch Dec 16 12:26:53.715036 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:53.716366 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:53.717759 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:53.733597 ignition[1160]: PUT result: OK Dec 16 12:26:53.737643 ignition[1160]: parsed url from cmdline: "" Dec 16 12:26:53.737805 ignition[1160]: no config URL provided Dec 16 12:26:53.737825 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:26:53.737853 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:26:53.738121 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:53.739966 ignition[1160]: PUT result: OK Dec 16 12:26:53.740053 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 16 12:26:53.747438 ignition[1160]: GET result: OK Dec 16 12:26:53.759887 unknown[1160]: fetched base config from "system" Dec 16 12:26:53.747694 ignition[1160]: parsing config with SHA512: 4499192aef4da5900d7651278c39c4e4c0c24504469f5ae431eff5fac1bc7a298d10b98a3c80616864a4d99aab0788e60786a5a3ab4a114b03b00958b5ef367f Dec 16 12:26:53.759904 unknown[1160]: fetched base config from "system" Dec 16 12:26:53.760860 ignition[1160]: fetch: fetch complete Dec 16 12:26:53.759928 unknown[1160]: fetched user config from "aws" Dec 16 12:26:53.760873 ignition[1160]: fetch: fetch passed Dec 16 12:26:53.769171 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:26:53.760971 ignition[1160]: Ignition finished successfully Dec 16 12:26:53.774403 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 16 12:26:53.838154 ignition[1166]: Ignition 2.22.0 Dec 16 12:26:53.838187 ignition[1166]: Stage: kargs Dec 16 12:26:53.838778 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:53.838811 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:53.838991 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:53.841812 ignition[1166]: PUT result: OK Dec 16 12:26:53.850003 ignition[1166]: kargs: kargs passed Dec 16 12:26:53.850123 ignition[1166]: Ignition finished successfully Dec 16 12:26:53.857953 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:26:53.867203 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:26:53.917365 ignition[1173]: Ignition 2.22.0 Dec 16 12:26:53.917397 ignition[1173]: Stage: disks Dec 16 12:26:53.917917 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:53.917946 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:53.918146 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:53.921824 ignition[1173]: PUT result: OK Dec 16 12:26:53.933211 ignition[1173]: disks: disks passed Dec 16 12:26:53.933351 ignition[1173]: Ignition finished successfully Dec 16 12:26:53.936006 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:26:53.939168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:26:53.940474 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:26:53.947955 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:26:53.952882 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:26:53.955711 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:26:53.968558 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 16 12:26:54.025627 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 16 12:26:54.032704 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:26:54.040500 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:26:54.178090 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 16 12:26:54.179948 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:26:54.184275 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:26:54.193202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:26:54.200416 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:26:54.205576 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 12:26:54.205661 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:26:54.205710 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:26:54.239505 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:26:54.248188 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Dec 16 12:26:54.248229 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:26:54.248265 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:26:54.249367 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 12:26:54.259146 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:26:54.259221 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:26:54.263244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:26:54.474272 systemd-networkd[1150]: eth0: Gained IPv6LL Dec 16 12:26:54.593691 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:26:54.616765 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:26:54.637616 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:26:54.648115 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:26:54.948026 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:26:54.955716 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:26:54.960347 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:26:54.992751 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:26:54.996015 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:26:55.025756 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:26:55.048133 ignition[1314]: INFO : Ignition 2.22.0 Dec 16 12:26:55.048133 ignition[1314]: INFO : Stage: mount Dec 16 12:26:55.051933 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:55.051933 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:55.051933 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:55.060648 ignition[1314]: INFO : PUT result: OK Dec 16 12:26:55.064462 ignition[1314]: INFO : mount: mount passed Dec 16 12:26:55.066308 ignition[1314]: INFO : Ignition finished successfully Dec 16 12:26:55.070721 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Dec 16 12:26:55.075424 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:26:55.183478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:26:55.234112 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1325) Dec 16 12:26:55.239576 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:26:55.239813 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:26:55.247980 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 16 12:26:55.248099 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Dec 16 12:26:55.251676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:26:55.310641 ignition[1342]: INFO : Ignition 2.22.0 Dec 16 12:26:55.310641 ignition[1342]: INFO : Stage: files Dec 16 12:26:55.314731 ignition[1342]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:55.314731 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:55.320025 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:55.323851 ignition[1342]: INFO : PUT result: OK Dec 16 12:26:55.329020 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:26:55.332240 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:26:55.332240 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:26:55.343449 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:26:55.346998 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:26:55.350226 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:26:55.347536 unknown[1342]: wrote ssh authorized keys file for user: core
Dec 16 12:26:55.364945 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:26:55.364945 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:26:55.483093 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:26:55.681916 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:26:55.686687 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:26:55.690987 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 12:26:55.695305 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:26:55.699441 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:26:55.699441 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:26:55.707923 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:26:55.712107 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:26:55.716244 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:26:55.725555 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 12:26:55.730233 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:26:55.734944 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:26:55.742962 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:26:55.742962 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:26:55.742962 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 12:26:56.182158 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:26:56.609690 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:26:56.609690 ignition[1342]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:26:56.617634 ignition[1342]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:26:56.627435 ignition[1342]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:26:56.627435 ignition[1342]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:26:56.627435 ignition[1342]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 12:26:56.638641 ignition[1342]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:26:56.638641 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:26:56.638641 ignition[1342]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:26:56.638641 ignition[1342]: INFO : files: files passed Dec 16 12:26:56.638641 ignition[1342]: INFO : Ignition finished successfully Dec 16 12:26:56.652935 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:26:56.662213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:26:56.667145 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 12:26:56.691026 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:26:56.691491 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:26:56.712356 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:26:56.716333 initrd-setup-root-after-ignition[1372]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:26:56.716333 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:26:56.724338 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:26:56.730995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:26:56.737483 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:26:56.841411 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:26:56.844714 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 12:26:56.848698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:26:56.856222 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:26:56.858975 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:26:56.860648 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:26:56.901421 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:26:56.910253 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:26:56.944808 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:26:56.945232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:26:56.953106 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:26:56.955997 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:26:56.956324 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:26:56.963589 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:26:56.971143 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:26:56.977800 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:26:56.980641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:26:56.988979 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:26:56.991883 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:26:56.999505 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:26:57.002744 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:26:57.010740 systemd[1]: Stopped target sysinit.target - System Initialization. 
Dec 16 12:26:57.015603 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:26:57.020778 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:26:57.024891 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:26:57.025408 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:26:57.032501 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:26:57.037911 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:26:57.040855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:26:57.045709 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:26:57.048751 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 12:26:57.049403 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:26:57.058860 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:26:57.059155 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:26:57.063046 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:26:57.063280 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:26:57.074917 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:26:57.078459 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:26:57.078740 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:26:57.092831 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:26:57.097453 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:26:57.097743 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:26:57.108021 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 16 12:26:57.111403 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:26:57.132683 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:26:57.135163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:26:57.150821 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:26:57.160570 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:26:57.163361 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:26:57.174364 ignition[1396]: INFO : Ignition 2.22.0 Dec 16 12:26:57.176546 ignition[1396]: INFO : Stage: umount Dec 16 12:26:57.176546 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:26:57.176546 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 16 12:26:57.176546 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 16 12:26:57.188344 ignition[1396]: INFO : PUT result: OK Dec 16 12:26:57.198597 ignition[1396]: INFO : umount: umount passed Dec 16 12:26:57.202954 ignition[1396]: INFO : Ignition finished successfully Dec 16 12:26:57.208617 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 12:26:57.211223 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:26:57.214764 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:26:57.214920 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:26:57.223165 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:26:57.223280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:26:57.229669 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:26:57.229772 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:26:57.234360 systemd[1]: Stopped target network.target - Network. 
Dec 16 12:26:57.236495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:26:57.236608 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:26:57.239493 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:26:57.241558 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:26:57.241758 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:26:57.246694 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:26:57.248779 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:26:57.251186 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:26:57.251271 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:26:57.260871 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:26:57.260960 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:26:57.265024 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:26:57.265689 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:26:57.269540 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:26:57.269634 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:26:57.272980 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:26:57.273112 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:26:57.278293 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:26:57.285217 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:26:57.310631 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:26:57.310849 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:26:57.321567 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:26:57.322154 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:26:57.322246 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:26:57.338867 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:26:57.345226 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:26:57.345449 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:26:57.380344 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:26:57.382107 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:26:57.385810 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:26:57.385907 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:26:57.396439 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:26:57.401824 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:26:57.404207 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:26:57.413823 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:26:57.413991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:26:57.433347 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:26:57.433459 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:26:57.436758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:26:57.458835 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:26:57.478816 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:26:57.483212 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:26:57.488399 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:26:57.488590 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:26:57.495247 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:26:57.495329 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:26:57.498285 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:26:57.498420 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:26:57.502848 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:26:57.502975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:26:57.512279 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:26:57.512408 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:26:57.529723 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:26:57.532437 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:26:57.532568 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:26:57.544238 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:26:57.544360 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:26:57.556788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:26:57.556903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:26:57.571913 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:26:57.577656 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:26:57.590232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:26:57.595394 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:26:57.600443 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:26:57.616641 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:26:57.645519 systemd[1]: Switching root.
Dec 16 12:26:57.703167 systemd-journald[259]: Journal stopped
Dec 16 12:27:00.116429 systemd-journald[259]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:27:00.116566 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:27:00.116610 kernel: SELinux: policy capability open_perms=1
Dec 16 12:27:00.116639 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:27:00.116668 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:27:00.116697 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:27:00.116729 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:27:00.116759 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:27:00.116788 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:27:00.116829 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:27:00.116858 kernel: audit: type=1403 audit(1765888018.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:27:00.116890 systemd[1]: Successfully loaded SELinux policy in 99.310ms.
Dec 16 12:27:00.116942 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.638ms.
Dec 16 12:27:00.116975 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:27:00.117008 systemd[1]: Detected virtualization amazon.
Dec 16 12:27:00.117040 systemd[1]: Detected architecture arm64.
Dec 16 12:27:00.117104 systemd[1]: Detected first boot.
Dec 16 12:27:00.117139 systemd[1]: Initializing machine ID from VM UUID.
Dec 16 12:27:00.117192 zram_generator::config[1442]: No configuration found.
Dec 16 12:27:00.117228 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:27:00.117258 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:27:00.117290 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:27:00.117322 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:27:00.117355 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:27:00.117386 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:27:00.117458 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:27:00.117501 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:27:00.117535 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:27:00.117568 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:27:00.125872 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:27:00.125918 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:27:00.125952 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:27:00.125984 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:27:00.126014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:27:00.126045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:27:00.126135 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:27:00.126169 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:27:00.126201 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:27:00.126233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:27:00.126263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 16 12:27:00.126295 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:27:00.126327 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:27:00.126357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:27:00.126391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:27:00.126419 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:27:00.126447 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:27:00.126476 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:27:00.126507 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:27:00.126537 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:27:00.126567 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:27:00.126596 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:27:00.126630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:27:00.126666 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:27:00.126694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:27:00.126723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:27:00.126754 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:27:00.126783 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:27:00.126813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:27:00.126844 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:27:00.126873 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:27:00.126904 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:27:00.126940 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:27:00.126971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:27:00.127003 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:27:00.127036 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:27:00.127103 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:27:00.127140 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:27:00.127170 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:27:00.127200 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:27:00.127235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:27:00.127264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:27:00.127292 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:27:00.127325 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:27:00.127355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:27:00.127385 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:27:00.127417 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:27:00.127448 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:27:00.127482 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:27:00.127512 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:27:00.127541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:27:00.127573 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:27:00.127602 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:27:00.127635 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:27:00.127668 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:27:00.127697 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:27:00.127729 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:27:00.127758 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:27:00.127789 systemd[1]: Stopped verity-setup.service.
Dec 16 12:27:00.127818 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:27:00.127847 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:27:00.127876 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:27:00.127914 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:27:00.127949 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:27:00.127982 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:27:00.128010 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:27:00.128039 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:27:00.128117 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:27:00.128160 kernel: loop: module loaded
Dec 16 12:27:00.128191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:27:00.128287 systemd-journald[1521]: Collecting audit messages is disabled.
Dec 16 12:27:00.128350 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:27:00.128380 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:27:00.128409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:27:00.128437 systemd-journald[1521]: Journal started
Dec 16 12:27:00.128489 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec267b5d1163ec2347373bc0163d6da2) is 8M, max 75.3M, 67.3M free.
Dec 16 12:26:59.553960 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:26:59.571048 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 16 12:26:59.572136 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:27:00.141144 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:27:00.143482 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:27:00.143908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:27:00.169099 kernel: fuse: init (API version 7.41)
Dec 16 12:27:00.186238 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:27:00.189229 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:27:00.192886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:27:00.198491 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:27:00.221938 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:27:00.230230 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:27:00.233030 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:27:00.233149 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:27:00.240478 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:27:00.249690 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:27:00.250679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:27:00.270410 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:27:00.277543 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:27:00.280421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:27:00.287536 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:27:00.290360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:27:00.300397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:27:00.326638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:27:00.335794 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:27:00.339488 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:27:00.342677 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:27:00.355443 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:27:00.360180 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec267b5d1163ec2347373bc0163d6da2 is 56.756ms for 915 entries.
Dec 16 12:27:00.360180 systemd-journald[1521]: System Journal (/var/log/journal/ec267b5d1163ec2347373bc0163d6da2) is 8M, max 195.6M, 187.6M free.
Dec 16 12:27:00.433471 systemd-journald[1521]: Received client request to flush runtime journal.
Dec 16 12:27:00.433570 kernel: ACPI: bus type drm_connector registered
Dec 16 12:27:00.381598 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:27:00.399938 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:27:00.404244 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:27:00.416181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:27:00.427631 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:27:00.440697 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:27:00.453515 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:27:00.456699 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:27:00.465030 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:27:00.470216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:27:00.505118 kernel: loop0: detected capacity change from 0 to 61264
Dec 16 12:27:00.557254 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:27:00.573230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:27:00.579519 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:27:00.612401 kernel: loop1: detected capacity change from 0 to 100632
Dec 16 12:27:00.625559 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:27:00.633997 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:27:00.721521 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:27:00.733485 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
Dec 16 12:27:00.733528 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
Dec 16 12:27:00.746160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:27:00.758117 kernel: loop2: detected capacity change from 0 to 200800
Dec 16 12:27:00.822106 kernel: loop3: detected capacity change from 0 to 119840
Dec 16 12:27:00.951125 kernel: loop4: detected capacity change from 0 to 61264
Dec 16 12:27:00.998844 kernel: loop5: detected capacity change from 0 to 100632
Dec 16 12:27:01.014798 kernel: loop6: detected capacity change from 0 to 200800
Dec 16 12:27:01.050918 kernel: loop7: detected capacity change from 0 to 119840
Dec 16 12:27:01.071974 (sd-merge)[1600]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 16 12:27:01.074230 (sd-merge)[1600]: Merged extensions into '/usr'.
Dec 16 12:27:01.088281 systemd[1]: Reload requested from client PID 1567 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:27:01.088323 systemd[1]: Reloading...
Dec 16 12:27:01.305100 zram_generator::config[1629]: No configuration found.
Dec 16 12:27:01.827131 systemd[1]: Reloading finished in 737 ms.
Dec 16 12:27:01.857196 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:27:01.864597 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:27:01.880660 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:27:01.890319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:27:01.898490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:27:01.926373 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:27:01.926398 systemd[1]: Reloading...
Dec 16 12:27:01.990352 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:27:01.990437 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:27:01.993188 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:27:01.993837 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:27:02.003134 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:27:02.003868 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Dec 16 12:27:02.004036 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Dec 16 12:27:02.022766 systemd-udevd[1680]: Using default interface naming scheme 'v255'.
Dec 16 12:27:02.034660 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:27:02.034694 systemd-tmpfiles[1679]: Skipping /boot
Dec 16 12:27:02.105469 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:27:02.105665 systemd-tmpfiles[1679]: Skipping /boot
Dec 16 12:27:02.162108 zram_generator::config[1710]: No configuration found.
Dec 16 12:27:02.206130 ldconfig[1561]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:27:02.632310 (udev-worker)[1784]: Network interface NamePolicy= disabled on kernel command line.
Dec 16 12:27:02.814796 systemd[1]: Reloading finished in 887 ms.
Dec 16 12:27:02.832553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:27:02.837001 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:27:02.876948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:27:02.927747 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:27:02.953650 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 16 12:27:02.964370 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:27:02.971699 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:27:02.974913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:27:02.980644 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:27:02.990804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:27:02.995482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:27:03.000653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:27:03.003489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:27:03.003598 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:27:03.048446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:27:03.055908 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:27:03.067172 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:27:03.072348 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:27:03.083947 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:27:03.133977 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:27:03.154700 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:27:03.156288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:27:03.176154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:27:03.176694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:27:03.179982 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:27:03.198569 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:27:03.229848 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:27:03.231446 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:27:03.246937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:27:03.247835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:27:03.251246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:27:03.259340 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:27:03.264707 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:27:03.326082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:27:03.332973 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:27:03.383980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 16 12:27:03.390595 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:27:03.429769 augenrules[1931]: No rules
Dec 16 12:27:03.433364 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:27:03.433969 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:27:03.444843 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:27:03.513016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:27:03.574684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:27:03.700195 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:27:03.796192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:27:03.885493 systemd-networkd[1888]: lo: Link UP
Dec 16 12:27:03.885509 systemd-networkd[1888]: lo: Gained carrier
Dec 16 12:27:03.889508 systemd-networkd[1888]: Enumeration completed
Dec 16 12:27:03.889836 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:27:03.892215 systemd-resolved[1889]: Positive Trust Anchors:
Dec 16 12:27:03.892242 systemd-resolved[1889]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:27:03.892305 systemd-resolved[1889]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:27:03.900250 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:27:03.905957 systemd-networkd[1888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:27:03.905987 systemd-networkd[1888]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:27:03.906326 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:27:03.919526 systemd-networkd[1888]: eth0: Link UP
Dec 16 12:27:03.919861 systemd-networkd[1888]: eth0: Gained carrier
Dec 16 12:27:03.919911 systemd-networkd[1888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:27:03.922619 systemd-resolved[1889]: Defaulting to hostname 'linux'.
Dec 16 12:27:03.929372 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:27:03.931522 systemd-networkd[1888]: eth0: DHCPv4 address 172.31.24.3/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 16 12:27:03.932320 systemd[1]: Reached target network.target - Network.
Dec 16 12:27:03.934547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:27:03.939193 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:27:03.944452 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:27:03.947539 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:27:03.950942 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:27:03.954018 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:27:03.957346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:27:03.960418 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:27:03.960482 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:27:03.962712 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:27:03.966790 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:27:03.972675 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:27:03.979548 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:27:03.982987 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:27:03.986316 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:27:03.992967 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:27:03.996147 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:27:04.000354 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:27:04.004295 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:27:04.006681 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:27:04.009372 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:27:04.009429 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:27:04.012585 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:27:04.020356 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:27:04.032345 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:27:04.040465 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:27:04.046180 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:27:04.050985 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:27:04.053460 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:27:04.062490 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:27:04.070867 systemd[1]: Started ntpd.service - Network Time Service.
Dec 16 12:27:04.079294 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:27:04.083799 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 16 12:27:04.097587 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:27:04.112357 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:27:04.123818 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:27:04.128784 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:27:04.129730 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:27:04.137488 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:27:04.145047 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:27:04.151626 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:27:04.163193 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:27:04.175828 jq[1965]: false
Dec 16 12:27:04.182204 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:27:04.182767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:27:04.214657 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:27:04.216252 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:27:04.229662 extend-filesystems[1966]: Found /dev/nvme0n1p6
Dec 16 12:27:04.248421 extend-filesystems[1966]: Found /dev/nvme0n1p9
Dec 16 12:27:04.259461 extend-filesystems[1966]: Checking size of /dev/nvme0n1p9
Dec 16 12:27:04.281494 jq[1975]: true
Dec 16 12:27:04.349439 extend-filesystems[1966]: Resized partition /dev/nvme0n1p9
Dec 16 12:27:04.372609 (ntainerd)[2004]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: ----------------------------------------------------
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: ntp-4 is maintained by Network Time Foundation,
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: corporation. Support and training for ntp-4 are
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: available at https://www.nwtime.org/support
Dec 16 12:27:04.377608 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: ----------------------------------------------------
Dec 16 12:27:04.376646 ntpd[1968]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting
Dec 16 12:27:04.378666 extend-filesystems[2015]: resize2fs 1.47.3 (8-Jul-2025)
Dec 16 12:27:04.420986 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Dec 16 12:27:04.376760 ntpd[1968]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 16 12:27:04.421238 jq[2003]: true
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: proto: precision = 0.108 usec (-23)
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: basedate set to 2025-11-30
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: gps base set to 2025-11-30 (week 2395)
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Listen normally on 3 eth0 172.31.24.3:123
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: Listen normally on 4 lo [::1]:123
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: bind(21) AF_INET6 [fe80::4f4:ceff:feb9:71f5%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 12:27:04.421596 ntpd[1968]: 16 Dec 12:27:04 ntpd[1968]: unable to create socket on eth0 (5) for [fe80::4f4:ceff:feb9:71f5%2]:123
Dec 16 12:27:04.395680 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:27:04.376781 ntpd[1968]: ----------------------------------------------------
Dec 16 12:27:04.410994 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:27:04.376798 ntpd[1968]: ntp-4 is maintained by Network Time Foundation,
Dec 16 12:27:04.414704 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:27:04.376815 ntpd[1968]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 16 12:27:04.416748 systemd-coredump[2017]: Process 1968 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Dec 16 12:27:04.376834 ntpd[1968]: corporation. Support and training for ntp-4 are
Dec 16 12:27:04.441823 tar[1979]: linux-arm64/LICENSE
Dec 16 12:27:04.441823 tar[1979]: linux-arm64/helm
Dec 16 12:27:04.439105 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 12:27:04.376852 ntpd[1968]: available at https://www.nwtime.org/support
Dec 16 12:27:04.443758 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Dec 16 12:27:04.376868 ntpd[1968]: ----------------------------------------------------
Dec 16 12:27:04.446944 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:27:04.391666 ntpd[1968]: proto: precision = 0.108 usec (-23)
Dec 16 12:27:04.446999 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:27:04.393481 dbus-daemon[1963]: [system] SELinux support is enabled
Dec 16 12:27:04.452304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:27:04.397492 ntpd[1968]: basedate set to 2025-11-30
Dec 16 12:27:04.452344 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:27:04.397533 ntpd[1968]: gps base set to 2025-11-30 (week 2395)
Dec 16 12:27:04.397745 ntpd[1968]: Listen and drop on 0 v6wildcard [::]:123
Dec 16 12:27:04.397805 ntpd[1968]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 16 12:27:04.398193 ntpd[1968]: Listen normally on 2 lo 127.0.0.1:123
Dec 16 12:27:04.398247 ntpd[1968]: Listen normally on 3 eth0 172.31.24.3:123
Dec 16 12:27:04.398298 ntpd[1968]: Listen normally on 4 lo [::1]:123
Dec 16 12:27:04.398349 ntpd[1968]: bind(21) AF_INET6 [fe80::4f4:ceff:feb9:71f5%2]:123 flags 0x811 failed: Cannot assign requested address
Dec 16 12:27:04.398391 ntpd[1968]: unable to create socket on eth0 (5) for [fe80::4f4:ceff:feb9:71f5%2]:123
Dec 16 12:27:04.461880 dbus-daemon[1963]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1888 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 16 12:27:04.463932 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 12:27:04.466005 systemd[1]: Started systemd-coredump@0-2017-0.service - Process Core Dump (PID 2017/UID 0).
Dec 16 12:27:04.484979 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 16 12:27:04.497244 update_engine[1974]: I20251216 12:27:04.495096 1974 main.cc:92] Flatcar Update Engine starting
Dec 16 12:27:04.510730 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:27:04.519202 update_engine[1974]: I20251216 12:27:04.510632 1974 update_check_scheduler.cc:74] Next update check in 4m48s
Dec 16 12:27:04.546217 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:27:04.560680 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 16 12:27:04.652868 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Dec 16 12:27:04.666319 coreos-metadata[1962]: Dec 16 12:27:04.663 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 16 12:27:04.670051 extend-filesystems[2015]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 16 12:27:04.670051 extend-filesystems[2015]: old_desc_blocks = 1, new_desc_blocks = 2
Dec 16 12:27:04.670051 extend-filesystems[2015]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Dec 16 12:27:04.683586 extend-filesystems[1966]: Resized filesystem in /dev/nvme0n1p9
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.680 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.680 INFO Fetch successful
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.680 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.681 INFO Fetch successful
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.681 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.687 INFO Fetch successful
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.687 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.697 INFO Fetch successful
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.697 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.698 INFO Fetch failed with 404: resource not found
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.699 INFO Fetch successful
Dec 16 12:27:04.706150 coreos-metadata[1962]: Dec 16 12:27:04.699 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 16 12:27:04.673585 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:27:04.706893 bash[2047]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:27:04.715283 coreos-metadata[1962]: Dec 16 12:27:04.706 INFO Fetch successful
Dec 16 12:27:04.715283 coreos-metadata[1962]: Dec 16 12:27:04.706 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 16 12:27:04.715283 coreos-metadata[1962]: Dec 16 12:27:04.715 INFO Fetch successful
Dec 16 12:27:04.715283 coreos-metadata[1962]: Dec 16 12:27:04.715 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 16 12:27:04.715283 coreos-metadata[1962]: Dec 16 12:27:04.715 INFO Fetch successful
Dec 16 12:27:04.715283 coreos-metadata[1962]: Dec 16 12:27:04.715 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 16 12:27:04.674036 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:27:04.709113 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:27:04.718037 systemd[1]: Starting sshkeys.service...
Dec 16 12:27:04.725194 coreos-metadata[1962]: Dec 16 12:27:04.724 INFO Fetch successful
Dec 16 12:27:04.895970 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 16 12:27:04.903506 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 16 12:27:04.916356 systemd-logind[1973]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 16 12:27:04.916406 systemd-logind[1973]: Watching system buttons on /dev/input/event1 (Sleep Button)
Dec 16 12:27:04.926320 systemd-logind[1973]: New seat seat0.
Dec 16 12:27:04.942419 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:27:05.123153 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:27:05.127690 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:27:05.215835 coreos-metadata[2088]: Dec 16 12:27:05.215 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 16 12:27:05.215835 coreos-metadata[2088]: Dec 16 12:27:05.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 16 12:27:05.215835 coreos-metadata[2088]: Dec 16 12:27:05.215 INFO Fetch successful
Dec 16 12:27:05.215835 coreos-metadata[2088]: Dec 16 12:27:05.215 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 16 12:27:05.215835 coreos-metadata[2088]: Dec 16 12:27:05.215 INFO Fetch successful
Dec 16 12:27:05.217304 unknown[2088]: wrote ssh authorized keys file for user: core
Dec 16 12:27:05.292181 systemd-networkd[1888]: eth0: Gained IPv6LL
Dec 16 12:27:05.303800 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:27:05.307826 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:27:05.313728 update-ssh-keys[2139]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:27:05.317718 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Dec 16 12:27:05.330469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:05.337477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:27:05.342417 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 16 12:27:05.363535 systemd[1]: Finished sshkeys.service.
Dec 16 12:27:05.413905 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 16 12:27:05.438690 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 16 12:27:05.457704 dbus-daemon[1963]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2025 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 16 12:27:05.473764 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 16 12:27:05.577316 containerd[2004]: time="2025-12-16T12:27:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:27:05.578768 containerd[2004]: time="2025-12-16T12:27:05.578709384Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 12:27:05.623818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:27:05.647092 amazon-ssm-agent[2145]: Initializing new seelog logger
Dec 16 12:27:05.655179 systemd-coredump[2023]: Process 1968 (ntpd) of user 0 dumped core.
Module libnss_usrfiles.so.2 without build-id.
Module libgcc_s.so.1 without build-id.
Module libc.so.6 without build-id.
Module libcrypto.so.3 without build-id.
Module libm.so.6 without build-id.
Module libcap.so.2 without build-id.
Module ntpd without build-id.
Stack trace of thread 1968:
#0 0x0000aaaae9290b5c n/a (ntpd + 0x60b5c)
#1 0x0000aaaae923fe60 n/a (ntpd + 0xfe60)
#2 0x0000aaaae9240240 n/a (ntpd + 0x10240)
#3 0x0000aaaae923be14 n/a (ntpd + 0xbe14)
#4 0x0000aaaae923d3ec n/a (ntpd + 0xd3ec)
#5 0x0000aaaae9245a38 n/a (ntpd + 0x15a38)
#6 0x0000aaaae923738c n/a (ntpd + 0x738c)
#7 0x0000ffff85552034 n/a (libc.so.6 + 0x22034)
#8 0x0000ffff85552118 __libc_start_main (libc.so.6 + 0x22118)
#9 0x0000aaaae92373f0 n/a (ntpd + 0x73f0)
ELF object binary architecture: AARCH64
Dec 16 12:27:05.661362 amazon-ssm-agent[2145]: New Seelog Logger Creation Complete
Dec 16 12:27:05.661362 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.661362 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.668522 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 processing appconfig overrides
Dec 16 12:27:05.668522 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.668522 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.669414 amazon-ssm-agent[2145]: 2025-12-16 12:27:05.6649 INFO Proxy environment variables:
Dec 16 12:27:05.673787 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Dec 16 12:27:05.674279 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Dec 16 12:27:05.680551 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 processing appconfig overrides
Dec 16 12:27:05.679824 systemd[1]: systemd-coredump@0-2017-0.service: Deactivated successfully.
Dec 16 12:27:05.682915 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.682915 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.691204 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 processing appconfig overrides
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.706235713Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.156µs"
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.706300897Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.706341637Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.706721773Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.706764517Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.706819837Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.707228113Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:27:05.708107 containerd[2004]: time="2025-12-16T12:27:05.707281813Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.709512709Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.709594729Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.709629925Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.709676737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.709995877Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.712216537Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.712399681Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.712509037Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:27:05.712967 containerd[2004]: time="2025-12-16T12:27:05.712623793Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:27:05.713884 containerd[2004]: time="2025-12-16T12:27:05.713493469Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:27:05.717194 containerd[2004]: time="2025-12-16T12:27:05.714401341Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:27:05.717603 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.719286 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 16 12:27:05.720766 amazon-ssm-agent[2145]: 2025/12/16 12:27:05 processing appconfig overrides
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.723109417Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.723236341Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724144441Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724202497Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724236409Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724264765Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724298509Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724343929Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724377325Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724405213Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724429105Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724462201Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724692901Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:27:05.730131 containerd[2004]: time="2025-12-16T12:27:05.724735225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724781929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724815589Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724842937Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724868365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724895053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724919977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724948537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.724974469Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.725000581Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.726479401Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.726539977Z" level=info msg="Start snapshots syncer"
Dec 16 12:27:05.730759 containerd[2004]: time="2025-12-16T12:27:05.727683037Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:27:05.732356 containerd[2004]: time="2025-12-16T12:27:05.729525949Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:27:05.732356 containerd[2004]: time="2025-12-16T12:27:05.729639217Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731316637Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731582593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731628433Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731659321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731685973Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731714557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731740789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731767597Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731825089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731854069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731883409Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731939557Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731970721Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:27:05.732542 containerd[2004]: time="2025-12-16T12:27:05.731993101Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.732017785Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734480689Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734543941Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734582473Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734764801Z" level=info msg="runtime interface created" Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734785345Z" level=info msg="created NRI interface" Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734808649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734848741Z" level=info msg="Connect containerd service" Dec 16 12:27:05.740147 containerd[2004]: time="2025-12-16T12:27:05.734908873Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 12:27:05.743729 containerd[2004]: time="2025-12-16T12:27:05.742508449Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:27:05.772201 amazon-ssm-agent[2145]: 2025-12-16 12:27:05.6649 INFO https_proxy: Dec 16 12:27:05.863664 locksmithd[2027]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 12:27:05.876244 amazon-ssm-agent[2145]: 2025-12-16 12:27:05.6649 INFO http_proxy: Dec 16 12:27:05.981362 amazon-ssm-agent[2145]: 2025-12-16 12:27:05.6649 INFO no_proxy: Dec 16 12:27:06.024733 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Dec 16 12:27:06.029536 systemd[1]: Started ntpd.service - Network Time Service. 
Dec 16 12:27:06.086356 amazon-ssm-agent[2145]: 2025-12-16 12:27:05.6749 INFO Checking if agent identity type OnPrem can be assumed Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101157743Z" level=info msg="Start subscribing containerd event" Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101278475Z" level=info msg="Start recovering state" Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101485523Z" level=info msg="Start event monitor" Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101542223Z" level=info msg="Start cni network conf syncer for default" Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101561807Z" level=info msg="Start streaming server" Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101606519Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101629751Z" level=info msg="runtime interface starting up..." Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101646179Z" level=info msg="starting plugins..." Dec 16 12:27:06.101960 containerd[2004]: time="2025-12-16T12:27:06.101701115Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:27:06.104782 containerd[2004]: time="2025-12-16T12:27:06.104210219Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 12:27:06.104782 containerd[2004]: time="2025-12-16T12:27:06.104445803Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 12:27:06.106252 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 16 12:27:06.109659 containerd[2004]: time="2025-12-16T12:27:06.108926039Z" level=info msg="containerd successfully booted in 0.550561s" Dec 16 12:27:06.153465 ntpd[2207]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: ntpd 4.2.8p18@1.4062-o Fri Dec 12 14:43:59 UTC 2025 (1): Starting Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: ---------------------------------------------------- Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: corporation. Support and training for ntp-4 are Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: available at https://www.nwtime.org/support Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: ---------------------------------------------------- Dec 16 12:27:06.155545 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: proto: precision = 0.096 usec (-23) Dec 16 12:27:06.153624 ntpd[2207]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 16 12:27:06.153643 ntpd[2207]: ---------------------------------------------------- Dec 16 12:27:06.153660 ntpd[2207]: ntp-4 is maintained by Network Time Foundation, Dec 16 12:27:06.153677 ntpd[2207]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 16 12:27:06.153693 ntpd[2207]: corporation. 
Support and training for ntp-4 are Dec 16 12:27:06.153711 ntpd[2207]: available at https://www.nwtime.org/support Dec 16 12:27:06.153727 ntpd[2207]: ---------------------------------------------------- Dec 16 12:27:06.154817 ntpd[2207]: proto: precision = 0.096 usec (-23) Dec 16 12:27:06.162544 ntpd[2207]: basedate set to 2025-11-30 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: basedate set to 2025-11-30 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: gps base set to 2025-11-30 (week 2395) Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listen normally on 3 eth0 172.31.24.3:123 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listen normally on 4 lo [::1]:123 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listen normally on 5 eth0 [fe80::4f4:ceff:feb9:71f5%2]:123 Dec 16 12:27:06.165194 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: Listening on routing socket on fd #22 for interface updates Dec 16 12:27:06.162593 ntpd[2207]: gps base set to 2025-11-30 (week 2395) Dec 16 12:27:06.162733 ntpd[2207]: Listen and drop on 0 v6wildcard [::]:123 Dec 16 12:27:06.162778 ntpd[2207]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 16 12:27:06.163104 ntpd[2207]: Listen normally on 2 lo 127.0.0.1:123 Dec 16 12:27:06.163155 ntpd[2207]: Listen normally on 3 eth0 172.31.24.3:123 Dec 16 12:27:06.163198 ntpd[2207]: Listen normally on 4 lo [::1]:123 Dec 16 12:27:06.163241 ntpd[2207]: Listen normally on 5 eth0 [fe80::4f4:ceff:feb9:71f5%2]:123 Dec 16 12:27:06.163281 ntpd[2207]: Listening on routing socket on fd #22 for interface updates Dec 16 12:27:06.185264 polkitd[2160]: Started polkitd version 126 Dec 16 
12:27:06.190867 amazon-ssm-agent[2145]: 2025-12-16 12:27:05.6803 INFO Checking if agent identity type EC2 can be assumed Dec 16 12:27:06.195377 ntpd[2207]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:27:06.196460 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:27:06.196460 ntpd[2207]: 16 Dec 12:27:06 ntpd[2207]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:27:06.195440 ntpd[2207]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 16 12:27:06.223352 polkitd[2160]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 12:27:06.225714 polkitd[2160]: Loading rules from directory /run/polkit-1/rules.d Dec 16 12:27:06.225804 polkitd[2160]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:27:06.226487 polkitd[2160]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 12:27:06.226551 polkitd[2160]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 12:27:06.226632 polkitd[2160]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 12:27:06.231170 polkitd[2160]: Finished loading, compiling and executing 2 rules Dec 16 12:27:06.233339 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 12:27:06.238493 dbus-daemon[1963]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 12:27:06.241671 polkitd[2160]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 12:27:06.283339 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0354 INFO Agent will take identity from EC2 Dec 16 12:27:06.292758 systemd-hostnamed[2025]: Hostname set to (transient) Dec 16 12:27:06.292930 systemd-resolved[1889]: System hostname changed to 'ip-172-31-24-3'. 
Dec 16 12:27:06.385096 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0438 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Dec 16 12:27:06.484309 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0438 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 16 12:27:06.585229 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0438 INFO [amazon-ssm-agent] Starting Core Agent Dec 16 12:27:06.685700 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0438 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Dec 16 12:27:06.762125 tar[1979]: linux-arm64/README.md Dec 16 12:27:06.786035 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0438 INFO [Registrar] Starting registrar module Dec 16 12:27:06.807901 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 12:27:06.888137 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0481 INFO [EC2Identity] Checking disk for registration info Dec 16 12:27:06.988297 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0482 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Dec 16 12:27:07.006094 amazon-ssm-agent[2145]: 2025/12/16 12:27:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:27:07.006094 amazon-ssm-agent[2145]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 16 12:27:07.006094 amazon-ssm-agent[2145]: 2025/12/16 12:27:07 processing appconfig overrides Dec 16 12:27:07.047673 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.0482 INFO [EC2Identity] Generating registration keypair Dec 16 12:27:07.047930 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.9568 INFO [EC2Identity] Checking write access before registering Dec 16 12:27:07.048142 amazon-ssm-agent[2145]: 2025-12-16 12:27:06.9575 INFO [EC2Identity] Registering EC2 instance with Systems Manager Dec 16 12:27:07.048266 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0027 INFO [EC2Identity] EC2 registration was successful. 
Dec 16 12:27:07.048414 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0028 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Dec 16 12:27:07.048524 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0029 INFO [CredentialRefresher] credentialRefresher has started Dec 16 12:27:07.048647 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0029 INFO [CredentialRefresher] Starting credentials refresher loop Dec 16 12:27:07.048764 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0470 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 16 12:27:07.048876 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0475 INFO [CredentialRefresher] Credentials ready Dec 16 12:27:07.088119 amazon-ssm-agent[2145]: 2025-12-16 12:27:07.0490 INFO [CredentialRefresher] Next credential rotation will be in 29.9999678049 minutes Dec 16 12:27:07.571411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:07.595747 (kubelet)[2228]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:27:07.712829 sshd_keygen[2016]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 12:27:07.757760 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 12:27:07.763717 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 12:27:07.768960 systemd[1]: Started sshd@0-172.31.24.3:22-139.178.89.65:57250.service - OpenSSH per-connection server daemon (139.178.89.65:57250). Dec 16 12:27:07.801090 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 12:27:07.801628 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 12:27:07.808585 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 12:27:07.853778 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 12:27:07.860562 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 16 12:27:07.867016 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 12:27:07.870190 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 12:27:07.876553 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:27:07.879448 systemd[1]: Startup finished in 3.789s (kernel) + 9.391s (initrd) + 9.855s (userspace) = 23.036s. Dec 16 12:27:08.041545 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 57250 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:08.045451 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:08.062007 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:27:08.066857 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:27:08.094173 systemd-logind[1973]: New session 1 of user core. Dec 16 12:27:08.113833 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 12:27:08.117877 amazon-ssm-agent[2145]: 2025-12-16 12:27:08.1177 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 16 12:27:08.122650 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:27:08.147678 (systemd)[2261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:27:08.152857 systemd-logind[1973]: New session c1 of user core. Dec 16 12:27:08.219135 amazon-ssm-agent[2145]: 2025-12-16 12:27:08.1273 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2260) started Dec 16 12:27:08.321200 amazon-ssm-agent[2145]: 2025-12-16 12:27:08.1273 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 16 12:27:08.608347 systemd[2261]: Queued start job for default target default.target. 
Dec 16 12:27:08.615268 systemd[2261]: Created slice app.slice - User Application Slice. Dec 16 12:27:08.615332 systemd[2261]: Reached target paths.target - Paths. Dec 16 12:27:08.615549 systemd[2261]: Reached target timers.target - Timers. Dec 16 12:27:08.618577 systemd[2261]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:27:08.653535 systemd[2261]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:27:08.653816 systemd[2261]: Reached target sockets.target - Sockets. Dec 16 12:27:08.653926 systemd[2261]: Reached target basic.target - Basic System. Dec 16 12:27:08.654008 systemd[2261]: Reached target default.target - Main User Target. Dec 16 12:27:08.654290 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:27:08.656130 systemd[2261]: Startup finished in 482ms. Dec 16 12:27:08.669436 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:27:08.694975 kubelet[2228]: E1216 12:27:08.694910 2228 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:27:08.699693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:27:08.700096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:27:08.702333 systemd[1]: kubelet.service: Consumed 1.379s CPU time, 249.5M memory peak. Dec 16 12:27:08.829712 systemd[1]: Started sshd@1-172.31.24.3:22-139.178.89.65:57264.service - OpenSSH per-connection server daemon (139.178.89.65:57264). 
Dec 16 12:27:09.049678 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 57264 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:09.052258 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:09.060356 systemd-logind[1973]: New session 2 of user core. Dec 16 12:27:09.068311 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 12:27:09.193227 sshd[2287]: Connection closed by 139.178.89.65 port 57264 Dec 16 12:27:09.193974 sshd-session[2284]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:09.200850 systemd[1]: sshd@1-172.31.24.3:22-139.178.89.65:57264.service: Deactivated successfully. Dec 16 12:27:09.204408 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 12:27:09.206482 systemd-logind[1973]: Session 2 logged out. Waiting for processes to exit. Dec 16 12:27:09.209686 systemd-logind[1973]: Removed session 2. Dec 16 12:27:09.229779 systemd[1]: Started sshd@2-172.31.24.3:22-139.178.89.65:57278.service - OpenSSH per-connection server daemon (139.178.89.65:57278). Dec 16 12:27:09.419389 sshd[2293]: Accepted publickey for core from 139.178.89.65 port 57278 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:09.421562 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:09.430858 systemd-logind[1973]: New session 3 of user core. Dec 16 12:27:09.436354 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:27:09.554096 sshd[2296]: Connection closed by 139.178.89.65 port 57278 Dec 16 12:27:09.554104 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:09.560364 systemd-logind[1973]: Session 3 logged out. Waiting for processes to exit. Dec 16 12:27:09.560622 systemd[1]: sshd@2-172.31.24.3:22-139.178.89.65:57278.service: Deactivated successfully. 
Dec 16 12:27:09.563568 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 12:27:09.567926 systemd-logind[1973]: Removed session 3. Dec 16 12:27:09.588150 systemd[1]: Started sshd@3-172.31.24.3:22-139.178.89.65:57294.service - OpenSSH per-connection server daemon (139.178.89.65:57294). Dec 16 12:27:09.780996 sshd[2302]: Accepted publickey for core from 139.178.89.65 port 57294 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:09.783915 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:09.793962 systemd-logind[1973]: New session 4 of user core. Dec 16 12:27:09.804374 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:27:09.932499 sshd[2305]: Connection closed by 139.178.89.65 port 57294 Dec 16 12:27:09.933759 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:09.943557 systemd[1]: sshd@3-172.31.24.3:22-139.178.89.65:57294.service: Deactivated successfully. Dec 16 12:27:09.948367 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:27:09.951577 systemd-logind[1973]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:27:09.972135 systemd-logind[1973]: Removed session 4. Dec 16 12:27:09.972556 systemd[1]: Started sshd@4-172.31.24.3:22-139.178.89.65:57300.service - OpenSSH per-connection server daemon (139.178.89.65:57300). Dec 16 12:27:10.162584 sshd[2311]: Accepted publickey for core from 139.178.89.65 port 57300 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:10.164913 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:10.172442 systemd-logind[1973]: New session 5 of user core. Dec 16 12:27:10.182310 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 12:27:10.298184 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:27:10.298799 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:27:10.314939 sudo[2315]: pam_unix(sudo:session): session closed for user root Dec 16 12:27:10.340098 sshd[2314]: Connection closed by 139.178.89.65 port 57300 Dec 16 12:27:10.338882 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:10.346564 systemd[1]: sshd@4-172.31.24.3:22-139.178.89.65:57300.service: Deactivated successfully. Dec 16 12:27:10.351025 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:27:10.353164 systemd-logind[1973]: Session 5 logged out. Waiting for processes to exit. Dec 16 12:27:10.355955 systemd-logind[1973]: Removed session 5. Dec 16 12:27:10.376398 systemd[1]: Started sshd@5-172.31.24.3:22-139.178.89.65:57302.service - OpenSSH per-connection server daemon (139.178.89.65:57302). Dec 16 12:27:10.576347 sshd[2321]: Accepted publickey for core from 139.178.89.65 port 57302 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:10.578623 sshd-session[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:10.587281 systemd-logind[1973]: New session 6 of user core. Dec 16 12:27:10.592349 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 16 12:27:10.696126 sudo[2326]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:27:10.696733 sudo[2326]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:27:10.707268 sudo[2326]: pam_unix(sudo:session): session closed for user root Dec 16 12:27:10.716628 sudo[2325]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:27:10.717660 sudo[2325]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:27:10.734877 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:27:10.795798 augenrules[2348]: No rules Dec 16 12:27:10.798289 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:27:10.800220 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 12:27:10.801779 sudo[2325]: pam_unix(sudo:session): session closed for user root Dec 16 12:27:10.825250 sshd[2324]: Connection closed by 139.178.89.65 port 57302 Dec 16 12:27:10.825971 sshd-session[2321]: pam_unix(sshd:session): session closed for user core Dec 16 12:27:10.834354 systemd[1]: sshd@5-172.31.24.3:22-139.178.89.65:57302.service: Deactivated successfully. Dec 16 12:27:10.837494 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:27:10.839225 systemd-logind[1973]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:27:10.842040 systemd-logind[1973]: Removed session 6. Dec 16 12:27:10.861537 systemd[1]: Started sshd@6-172.31.24.3:22-139.178.89.65:60616.service - OpenSSH per-connection server daemon (139.178.89.65:60616). 
Dec 16 12:27:11.057464 sshd[2357]: Accepted publickey for core from 139.178.89.65 port 60616 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:27:11.059622 sshd-session[2357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:27:11.067441 systemd-logind[1973]: New session 7 of user core. Dec 16 12:27:11.074283 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:27:11.176252 sudo[2361]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:27:11.176826 sudo[2361]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:27:11.679364 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 12:27:11.694822 (dockerd)[2378]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:27:12.056091 dockerd[2378]: time="2025-12-16T12:27:12.054723196Z" level=info msg="Starting up" Dec 16 12:27:12.056710 dockerd[2378]: time="2025-12-16T12:27:12.056669549Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:27:12.077226 dockerd[2378]: time="2025-12-16T12:27:12.077161001Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:27:12.119920 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2373699010-merged.mount: Deactivated successfully. Dec 16 12:27:12.171748 systemd[1]: var-lib-docker-metacopy\x2dcheck3420933637-merged.mount: Deactivated successfully. Dec 16 12:27:12.186033 dockerd[2378]: time="2025-12-16T12:27:12.185768609Z" level=info msg="Loading containers: start." Dec 16 12:27:12.201109 kernel: Initializing XFRM netlink socket Dec 16 12:27:12.539492 (udev-worker)[2401]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:27:12.611369 systemd-networkd[1888]: docker0: Link UP Dec 16 12:27:12.623257 dockerd[2378]: time="2025-12-16T12:27:12.623054647Z" level=info msg="Loading containers: done." Dec 16 12:27:12.673601 dockerd[2378]: time="2025-12-16T12:27:12.673536296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:27:12.673836 dockerd[2378]: time="2025-12-16T12:27:12.673715108Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:27:12.673893 dockerd[2378]: time="2025-12-16T12:27:12.673861712Z" level=info msg="Initializing buildkit" Dec 16 12:27:12.724503 dockerd[2378]: time="2025-12-16T12:27:12.724434632Z" level=info msg="Completed buildkit initialization" Dec 16 12:27:12.739671 dockerd[2378]: time="2025-12-16T12:27:12.739593272Z" level=info msg="Daemon has completed initialization" Dec 16 12:27:12.740821 dockerd[2378]: time="2025-12-16T12:27:12.740506148Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:27:12.739912 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:27:13.110363 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3472771545-merged.mount: Deactivated successfully. Dec 16 12:27:12.880788 systemd-resolved[1889]: Clock change detected. Flushing caches. Dec 16 12:27:12.891874 systemd-journald[1521]: Time jumped backwards, rotating. Dec 16 12:27:13.389030 containerd[2004]: time="2025-12-16T12:27:13.388525110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 16 12:27:14.062801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694240228.mount: Deactivated successfully. 
Dec 16 12:27:15.489083 containerd[2004]: time="2025-12-16T12:27:15.489025292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:15.491394 containerd[2004]: time="2025-12-16T12:27:15.491336804Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040"
Dec 16 12:27:15.493844 containerd[2004]: time="2025-12-16T12:27:15.493773956Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:15.500796 containerd[2004]: time="2025-12-16T12:27:15.500713521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:15.502994 containerd[2004]: time="2025-12-16T12:27:15.502774113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.114192711s"
Dec 16 12:27:15.502994 containerd[2004]: time="2025-12-16T12:27:15.502842789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Dec 16 12:27:15.503730 containerd[2004]: time="2025-12-16T12:27:15.503669001Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 12:27:16.915265 containerd[2004]: time="2025-12-16T12:27:16.915186816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:16.917161 containerd[2004]: time="2025-12-16T12:27:16.917104452Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477"
Dec 16 12:27:16.919661 containerd[2004]: time="2025-12-16T12:27:16.919572816Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:16.926999 containerd[2004]: time="2025-12-16T12:27:16.924905256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:16.927340 containerd[2004]: time="2025-12-16T12:27:16.927286920Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.423555987s"
Dec 16 12:27:16.927468 containerd[2004]: time="2025-12-16T12:27:16.927440028Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Dec 16 12:27:16.928483 containerd[2004]: time="2025-12-16T12:27:16.928439640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 12:27:18.096021 containerd[2004]: time="2025-12-16T12:27:18.095630517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:18.097469 containerd[2004]: time="2025-12-16T12:27:18.097403373Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716"
Dec 16 12:27:18.099242 containerd[2004]: time="2025-12-16T12:27:18.098314713Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:18.103319 containerd[2004]: time="2025-12-16T12:27:18.103248177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:18.105807 containerd[2004]: time="2025-12-16T12:27:18.105745209Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.176653645s"
Dec 16 12:27:18.106009 containerd[2004]: time="2025-12-16T12:27:18.105958917Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Dec 16 12:27:18.106839 containerd[2004]: time="2025-12-16T12:27:18.106737033Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 12:27:18.676508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:27:18.683365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:19.177219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:19.196665 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:27:19.306989 kubelet[2669]: E1216 12:27:19.306898 2669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:27:19.319050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:27:19.320024 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:27:19.321589 systemd[1]: kubelet.service: Consumed 353ms CPU time, 107.5M memory peak.
Dec 16 12:27:19.687211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291573238.mount: Deactivated successfully.
Dec 16 12:27:20.085937 containerd[2004]: time="2025-12-16T12:27:20.085760783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:20.088897 containerd[2004]: time="2025-12-16T12:27:20.088854839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253"
Dec 16 12:27:20.091203 containerd[2004]: time="2025-12-16T12:27:20.091161467Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:20.095645 containerd[2004]: time="2025-12-16T12:27:20.095587475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:20.096884 containerd[2004]: time="2025-12-16T12:27:20.096808955Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.989806854s"
Dec 16 12:27:20.096884 containerd[2004]: time="2025-12-16T12:27:20.096876815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Dec 16 12:27:20.097741 containerd[2004]: time="2025-12-16T12:27:20.097586975Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 12:27:20.677446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453006996.mount: Deactivated successfully.
Dec 16 12:27:22.048053 containerd[2004]: time="2025-12-16T12:27:22.047542093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:22.049818 containerd[2004]: time="2025-12-16T12:27:22.049732165Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Dec 16 12:27:22.052875 containerd[2004]: time="2025-12-16T12:27:22.052777273Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:22.060016 containerd[2004]: time="2025-12-16T12:27:22.058956841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:22.061540 containerd[2004]: time="2025-12-16T12:27:22.061471309Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.963835182s"
Dec 16 12:27:22.061704 containerd[2004]: time="2025-12-16T12:27:22.061672285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Dec 16 12:27:22.062545 containerd[2004]: time="2025-12-16T12:27:22.062464297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 12:27:22.616440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607136657.mount: Deactivated successfully.
Dec 16 12:27:22.630941 containerd[2004]: time="2025-12-16T12:27:22.630860740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:22.633084 containerd[2004]: time="2025-12-16T12:27:22.633037132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Dec 16 12:27:22.635412 containerd[2004]: time="2025-12-16T12:27:22.635338816Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:22.639900 containerd[2004]: time="2025-12-16T12:27:22.639795856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:22.642358 containerd[2004]: time="2025-12-16T12:27:22.641356948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 578.816475ms"
Dec 16 12:27:22.642358 containerd[2004]: time="2025-12-16T12:27:22.641428516Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Dec 16 12:27:22.642538 containerd[2004]: time="2025-12-16T12:27:22.642488224Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 12:27:23.319217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4077152011.mount: Deactivated successfully.
Dec 16 12:27:26.724993 containerd[2004]: time="2025-12-16T12:27:26.724046264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:26.725730 containerd[2004]: time="2025-12-16T12:27:26.725634704Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987"
Dec 16 12:27:26.727440 containerd[2004]: time="2025-12-16T12:27:26.727373876Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:26.734570 containerd[2004]: time="2025-12-16T12:27:26.734492444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:27:26.736932 containerd[2004]: time="2025-12-16T12:27:26.736861064Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.094311556s"
Dec 16 12:27:26.736932 containerd[2004]: time="2025-12-16T12:27:26.736924292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Dec 16 12:27:29.571065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:27:29.576333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:29.924192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:29.939494 (kubelet)[2819]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:27:30.020418 kubelet[2819]: E1216 12:27:30.020358 2819 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:27:30.024901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:27:30.026215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:27:30.027436 systemd[1]: kubelet.service: Consumed 316ms CPU time, 106.6M memory peak.
Dec 16 12:27:32.917772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:32.918376 systemd[1]: kubelet.service: Consumed 316ms CPU time, 106.6M memory peak.
Dec 16 12:27:32.934009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:32.977565 systemd[1]: Reload requested from client PID 2833 ('systemctl') (unit session-7.scope)...
Dec 16 12:27:32.977803 systemd[1]: Reloading...
Dec 16 12:27:33.246061 zram_generator::config[2880]: No configuration found.
Dec 16 12:27:33.745026 systemd[1]: Reloading finished in 766 ms.
Dec 16 12:27:33.822444 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:27:33.822629 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:27:33.823279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:33.823362 systemd[1]: kubelet.service: Consumed 256ms CPU time, 95M memory peak.
Dec 16 12:27:33.826251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:27:34.166508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:27:34.179512 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:27:34.254744 kubelet[2940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:27:34.254744 kubelet[2940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:27:34.254744 kubelet[2940]: I1216 12:27:34.253308 2940 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:27:35.878251 kubelet[2940]: I1216 12:27:35.878201 2940 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 12:27:35.878814 kubelet[2940]: I1216 12:27:35.878792 2940 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:27:35.879020 kubelet[2940]: I1216 12:27:35.878945 2940 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 12:27:35.879124 kubelet[2940]: I1216 12:27:35.879104 2940 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:27:35.879574 kubelet[2940]: I1216 12:27:35.879552 2940 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:27:35.892165 kubelet[2940]: E1216 12:27:35.892103 2940 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:27:35.894079 kubelet[2940]: I1216 12:27:35.894029 2940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:27:35.901505 kubelet[2940]: I1216 12:27:35.901447 2940 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:27:35.909206 kubelet[2940]: I1216 12:27:35.909150 2940 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 12:27:35.909629 kubelet[2940]: I1216 12:27:35.909564 2940 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:27:35.909914 kubelet[2940]: I1216 12:27:35.909623 2940 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:27:35.909914 kubelet[2940]: I1216 12:27:35.909901 2940 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:27:35.910162 kubelet[2940]: I1216 12:27:35.909922 2940 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 12:27:35.910162 kubelet[2940]: I1216 12:27:35.910118 2940 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 12:27:35.917978 kubelet[2940]: I1216 12:27:35.917886 2940 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:27:35.920485 kubelet[2940]: I1216 12:27:35.920449 2940 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 12:27:35.920583 kubelet[2940]: I1216 12:27:35.920491 2940 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:27:35.922131 kubelet[2940]: E1216 12:27:35.921615 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-3&limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:27:35.922131 kubelet[2940]: I1216 12:27:35.921647 2940 kubelet.go:387] "Adding apiserver pod source"
Dec 16 12:27:35.922131 kubelet[2940]: I1216 12:27:35.921690 2940 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:27:35.924414 kubelet[2940]: I1216 12:27:35.924379 2940 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:27:35.925678 kubelet[2940]: I1216 12:27:35.925646 2940 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:27:35.925843 kubelet[2940]: I1216 12:27:35.925823 2940 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 12:27:35.926057 kubelet[2940]: W1216 12:27:35.926037 2940 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:27:35.931413 kubelet[2940]: I1216 12:27:35.931384 2940 server.go:1262] "Started kubelet"
Dec 16 12:27:35.931875 kubelet[2940]: E1216 12:27:35.931834 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:27:35.934017 kubelet[2940]: I1216 12:27:35.933905 2940 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:27:35.935701 kubelet[2940]: I1216 12:27:35.935653 2940 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 12:27:35.938540 kubelet[2940]: I1216 12:27:35.938450 2940 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:27:35.938771 kubelet[2940]: I1216 12:27:35.938745 2940 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 12:27:35.939384 kubelet[2940]: I1216 12:27:35.939340 2940 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:27:35.941847 kubelet[2940]: E1216 12:27:35.939813 2940 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.3:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-3.1881b1d4aee473da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-3,UID:ip-172-31-24-3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-3,},FirstTimestamp:2025-12-16 12:27:35.931335642 +0000 UTC m=+1.744947574,LastTimestamp:2025-12-16 12:27:35.931335642 +0000 UTC m=+1.744947574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-3,}"
Dec 16 12:27:35.944620 kubelet[2940]: I1216 12:27:35.944199 2940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:27:35.946316 kubelet[2940]: I1216 12:27:35.946117 2940 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:27:35.951691 kubelet[2940]: E1216 12:27:35.950493 2940 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-3\" not found"
Dec 16 12:27:35.951691 kubelet[2940]: I1216 12:27:35.950549 2940 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 12:27:35.951691 kubelet[2940]: I1216 12:27:35.950839 2940 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 12:27:35.951691 kubelet[2940]: I1216 12:27:35.950934 2940 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 12:27:35.953525 kubelet[2940]: E1216 12:27:35.953480 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:27:35.957386 kubelet[2940]: E1216 12:27:35.957280 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-3?timeout=10s\": dial tcp 172.31.24.3:6443: connect: connection refused" interval="200ms"
Dec 16 12:27:35.958870 kubelet[2940]: E1216 12:27:35.958832 2940 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:27:35.959248 kubelet[2940]: I1216 12:27:35.959092 2940 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:27:35.959459 kubelet[2940]: I1216 12:27:35.959439 2940 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:27:35.959778 kubelet[2940]: I1216 12:27:35.959748 2940 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:27:35.999293 kubelet[2940]: I1216 12:27:35.999232 2940 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:27:36.003174 kubelet[2940]: I1216 12:27:36.003140 2940 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:27:36.004243 kubelet[2940]: I1216 12:27:36.004210 2940 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:27:36.005093 kubelet[2940]: I1216 12:27:36.004875 2940 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:27:36.005846 kubelet[2940]: I1216 12:27:36.005810 2940 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:27:36.006058 kubelet[2940]: I1216 12:27:36.006039 2940 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 12:27:36.006214 kubelet[2940]: I1216 12:27:36.006194 2940 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 12:27:36.007822 kubelet[2940]: E1216 12:27:36.007780 2940 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:27:36.007931 kubelet[2940]: E1216 12:27:36.007645 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:27:36.011941 kubelet[2940]: I1216 12:27:36.011524 2940 policy_none.go:49] "None policy: Start"
Dec 16 12:27:36.011941 kubelet[2940]: I1216 12:27:36.011562 2940 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 12:27:36.011941 kubelet[2940]: I1216 12:27:36.011586 2940 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 12:27:36.015816 kubelet[2940]: I1216 12:27:36.015770 2940 policy_none.go:47] "Start"
Dec 16 12:27:36.025242 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 12:27:36.043574 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 12:27:36.052493 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 12:27:36.053637 kubelet[2940]: E1216 12:27:36.053589 2940 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-3\" not found"
Dec 16 12:27:36.064793 kubelet[2940]: E1216 12:27:36.064756 2940 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 16 12:27:36.064844 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 16 12:27:36.065990 kubelet[2940]: I1216 12:27:36.065895 2940 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 16 12:27:36.066264 kubelet[2940]: I1216 12:27:36.066164 2940 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 16 12:27:36.069258 kubelet[2940]: E1216 12:27:36.069221 2940 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 16 12:27:36.069603 kubelet[2940]: E1216 12:27:36.069569 2940 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-3\" not found"
Dec 16 12:27:36.069876 kubelet[2940]: I1216 12:27:36.069847 2940 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 16 12:27:36.131012 systemd[1]: Created slice kubepods-burstable-pod61b6642bd1611096b6d3cba6caa192b3.slice - libcontainer container kubepods-burstable-pod61b6642bd1611096b6d3cba6caa192b3.slice.
Dec 16 12:27:36.149403 kubelet[2940]: E1216 12:27:36.149354 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3"
Dec 16 12:27:36.154224 kubelet[2940]: I1216 12:27:36.154186 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61b6642bd1611096b6d3cba6caa192b3-ca-certs\") pod \"kube-apiserver-ip-172-31-24-3\" (UID: \"61b6642bd1611096b6d3cba6caa192b3\") " pod="kube-system/kube-apiserver-ip-172-31-24-3"
Dec 16 12:27:36.154827 kubelet[2940]: I1216 12:27:36.154776 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3"
Dec 16 12:27:36.155078 kubelet[2940]: I1216 12:27:36.155041 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3"
Dec 16 12:27:36.155343 kubelet[2940]: I1216 12:27:36.155200 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3"
Dec 16 12:27:36.155794 kubelet[2940]: I1216 12:27:36.155509 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ab0cc926456cc71fcabfe4287ba30cb-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-3\" (UID: \"5ab0cc926456cc71fcabfe4287ba30cb\") " pod="kube-system/kube-scheduler-ip-172-31-24-3"
Dec 16 12:27:36.155794 kubelet[2940]: I1216 12:27:36.155748 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61b6642bd1611096b6d3cba6caa192b3-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-3\" (UID: \"61b6642bd1611096b6d3cba6caa192b3\") " pod="kube-system/kube-apiserver-ip-172-31-24-3"
Dec 16 12:27:36.156149 kubelet[2940]: I1216 12:27:36.156091 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61b6642bd1611096b6d3cba6caa192b3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-3\" (UID: \"61b6642bd1611096b6d3cba6caa192b3\") " pod="kube-system/kube-apiserver-ip-172-31-24-3"
Dec 16 12:27:36.156593 kubelet[2940]: I1216 12:27:36.156254 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3"
Dec 16 12:27:36.156731 kubelet[2940]: I1216 12:27:36.156559 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3"
Dec 16 12:27:36.157527 systemd[1]: Created slice kubepods-burstable-pod48f6324cdf7b8f082d0f26f4f5117b56.slice - libcontainer container kubepods-burstable-pod48f6324cdf7b8f082d0f26f4f5117b56.slice.
Dec 16 12:27:36.159937 kubelet[2940]: E1216 12:27:36.159859 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-3?timeout=10s\": dial tcp 172.31.24.3:6443: connect: connection refused" interval="400ms"
Dec 16 12:27:36.165046 kubelet[2940]: E1216 12:27:36.163768 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3"
Dec 16 12:27:36.168775 systemd[1]: Created slice kubepods-burstable-pod5ab0cc926456cc71fcabfe4287ba30cb.slice - libcontainer container kubepods-burstable-pod5ab0cc926456cc71fcabfe4287ba30cb.slice.
Dec 16 12:27:36.171285 kubelet[2940]: I1216 12:27:36.171232 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-3"
Dec 16 12:27:36.172282 kubelet[2940]: E1216 12:27:36.172226 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.3:6443/api/v1/nodes\": dial tcp 172.31.24.3:6443: connect: connection refused" node="ip-172-31-24-3"
Dec 16 12:27:36.174088 kubelet[2940]: E1216 12:27:36.174029 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3"
Dec 16 12:27:36.375191 kubelet[2940]: I1216 12:27:36.375150 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-3"
Dec 16 12:27:36.375651 kubelet[2940]: E1216 12:27:36.375605 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.3:6443/api/v1/nodes\": dial tcp 172.31.24.3:6443: connect: connection refused" node="ip-172-31-24-3"
Dec 16 12:27:36.457202 containerd[2004]: time="2025-12-16T12:27:36.457051997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-3,Uid:61b6642bd1611096b6d3cba6caa192b3,Namespace:kube-system,Attempt:0,}"
Dec 16 12:27:36.469302 containerd[2004]: time="2025-12-16T12:27:36.469233905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-3,Uid:48f6324cdf7b8f082d0f26f4f5117b56,Namespace:kube-system,Attempt:0,}"
Dec 16 12:27:36.480142 containerd[2004]: time="2025-12-16T12:27:36.480076529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-3,Uid:5ab0cc926456cc71fcabfe4287ba30cb,Namespace:kube-system,Attempt:0,}"
Dec 16 12:27:36.560848 kubelet[2940]: E1216 12:27:36.560791 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-3?timeout=10s\": dial tcp 172.31.24.3:6443: connect: connection refused" interval="800ms"
Dec 16 12:27:36.772450 kubelet[2940]: E1216 12:27:36.772113 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:27:36.778210 kubelet[2940]: I1216 12:27:36.778166 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-3"
Dec 16 12:27:36.779263 kubelet[2940]: E1216 12:27:36.779209 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.3:6443/api/v1/nodes\": dial tcp 172.31.24.3:6443: connect: connection refused" node="ip-172-31-24-3"
Dec 16 12:27:36.949677 kubelet[2940]: E1216 12:27:36.949586 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get
\"https://172.31.24.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-3&limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:27:36.974404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315146260.mount: Deactivated successfully. Dec 16 12:27:36.991467 containerd[2004]: time="2025-12-16T12:27:36.991387423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:27:36.996606 containerd[2004]: time="2025-12-16T12:27:36.996546727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 16 12:27:37.003318 containerd[2004]: time="2025-12-16T12:27:37.003265131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:27:37.006099 containerd[2004]: time="2025-12-16T12:27:37.006020487Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:27:37.007912 containerd[2004]: time="2025-12-16T12:27:37.007837407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:27:37.009986 containerd[2004]: time="2025-12-16T12:27:37.009918867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:27:37.013053 containerd[2004]: time="2025-12-16T12:27:37.012864015Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:27:37.020993 containerd[2004]: time="2025-12-16T12:27:37.020562195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:27:37.024082 containerd[2004]: time="2025-12-16T12:27:37.023900523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 563.39345ms" Dec 16 12:27:37.028643 containerd[2004]: time="2025-12-16T12:27:37.028583643Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 556.432622ms" Dec 16 12:27:37.068646 containerd[2004]: time="2025-12-16T12:27:37.068550388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 585.581331ms" Dec 16 12:27:37.074381 containerd[2004]: time="2025-12-16T12:27:37.074260132Z" level=info msg="connecting to shim 9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841" address="unix:///run/containerd/s/44447338c27c1e5fcab5c7b15ecfeca85c7baa9f96ead163346256089a8a7cb4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:37.082661 
containerd[2004]: time="2025-12-16T12:27:37.082404424Z" level=info msg="connecting to shim ba34bb4643cfa469ad7990841f9ef7a17646904b4ed838aedaa9b31c149853d9" address="unix:///run/containerd/s/6a30519dbf70bbe3b2b71f5425ba6f7ed985bee6587db7a0f56fea0e479cf497" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:37.089389 kubelet[2940]: E1216 12:27:37.089336 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:27:37.132455 containerd[2004]: time="2025-12-16T12:27:37.132394564Z" level=info msg="connecting to shim 0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8" address="unix:///run/containerd/s/ac2bc0be54df86dfe383c6c77f52f63d2f6a1168bd7b1e8d345ad2a2f8cae64c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:37.146331 systemd[1]: Started cri-containerd-9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841.scope - libcontainer container 9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841. Dec 16 12:27:37.181325 systemd[1]: Started cri-containerd-ba34bb4643cfa469ad7990841f9ef7a17646904b4ed838aedaa9b31c149853d9.scope - libcontainer container ba34bb4643cfa469ad7990841f9ef7a17646904b4ed838aedaa9b31c149853d9. Dec 16 12:27:37.217233 systemd[1]: Started cri-containerd-0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8.scope - libcontainer container 0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8. 
Dec 16 12:27:37.327959 containerd[2004]: time="2025-12-16T12:27:37.327675569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-3,Uid:61b6642bd1611096b6d3cba6caa192b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba34bb4643cfa469ad7990841f9ef7a17646904b4ed838aedaa9b31c149853d9\"" Dec 16 12:27:37.341849 containerd[2004]: time="2025-12-16T12:27:37.341793245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-3,Uid:48f6324cdf7b8f082d0f26f4f5117b56,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841\"" Dec 16 12:27:37.348991 kubelet[2940]: E1216 12:27:37.347694 2940 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:27:37.356991 containerd[2004]: time="2025-12-16T12:27:37.356566181Z" level=info msg="CreateContainer within sandbox \"ba34bb4643cfa469ad7990841f9ef7a17646904b4ed838aedaa9b31c149853d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:27:37.362916 kubelet[2940]: E1216 12:27:37.362837 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-3?timeout=10s\": dial tcp 172.31.24.3:6443: connect: connection refused" interval="1.6s" Dec 16 12:27:37.367768 containerd[2004]: time="2025-12-16T12:27:37.367691825Z" level=info msg="CreateContainer within sandbox \"9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:27:37.400569 containerd[2004]: time="2025-12-16T12:27:37.400520357Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-3,Uid:5ab0cc926456cc71fcabfe4287ba30cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8\"" Dec 16 12:27:37.414860 containerd[2004]: time="2025-12-16T12:27:37.414790217Z" level=info msg="Container e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:37.417261 containerd[2004]: time="2025-12-16T12:27:37.417169637Z" level=info msg="CreateContainer within sandbox \"0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:27:37.421752 containerd[2004]: time="2025-12-16T12:27:37.420546353Z" level=info msg="Container 025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:37.440849 containerd[2004]: time="2025-12-16T12:27:37.440773661Z" level=info msg="CreateContainer within sandbox \"9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18\"" Dec 16 12:27:37.442945 containerd[2004]: time="2025-12-16T12:27:37.442888986Z" level=info msg="StartContainer for \"025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18\"" Dec 16 12:27:37.446523 containerd[2004]: time="2025-12-16T12:27:37.446427174Z" level=info msg="connecting to shim 025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18" address="unix:///run/containerd/s/44447338c27c1e5fcab5c7b15ecfeca85c7baa9f96ead163346256089a8a7cb4" protocol=ttrpc version=3 Dec 16 12:27:37.468691 containerd[2004]: time="2025-12-16T12:27:37.468616194Z" level=info msg="CreateContainer within sandbox \"ba34bb4643cfa469ad7990841f9ef7a17646904b4ed838aedaa9b31c149853d9\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448\"" Dec 16 12:27:37.470398 containerd[2004]: time="2025-12-16T12:27:37.469727982Z" level=info msg="StartContainer for \"e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448\"" Dec 16 12:27:37.475030 containerd[2004]: time="2025-12-16T12:27:37.474493386Z" level=info msg="Container 29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:37.475935 containerd[2004]: time="2025-12-16T12:27:37.475863354Z" level=info msg="connecting to shim e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448" address="unix:///run/containerd/s/6a30519dbf70bbe3b2b71f5425ba6f7ed985bee6587db7a0f56fea0e479cf497" protocol=ttrpc version=3 Dec 16 12:27:37.500322 systemd[1]: Started cri-containerd-025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18.scope - libcontainer container 025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18. 
Dec 16 12:27:37.504295 containerd[2004]: time="2025-12-16T12:27:37.504187506Z" level=info msg="CreateContainer within sandbox \"0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d\"" Dec 16 12:27:37.507436 containerd[2004]: time="2025-12-16T12:27:37.507121266Z" level=info msg="StartContainer for \"29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d\"" Dec 16 12:27:37.512608 containerd[2004]: time="2025-12-16T12:27:37.512538582Z" level=info msg="connecting to shim 29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d" address="unix:///run/containerd/s/ac2bc0be54df86dfe383c6c77f52f63d2f6a1168bd7b1e8d345ad2a2f8cae64c" protocol=ttrpc version=3 Dec 16 12:27:37.533632 systemd[1]: Started cri-containerd-e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448.scope - libcontainer container e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448. Dec 16 12:27:37.582291 systemd[1]: Started cri-containerd-29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d.scope - libcontainer container 29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d. 
Dec 16 12:27:37.586573 kubelet[2940]: I1216 12:27:37.586507 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-3" Dec 16 12:27:37.588567 kubelet[2940]: E1216 12:27:37.588515 2940 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.3:6443/api/v1/nodes\": dial tcp 172.31.24.3:6443: connect: connection refused" node="ip-172-31-24-3" Dec 16 12:27:37.669543 containerd[2004]: time="2025-12-16T12:27:37.669392563Z" level=info msg="StartContainer for \"025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18\" returns successfully" Dec 16 12:27:37.728587 containerd[2004]: time="2025-12-16T12:27:37.728448775Z" level=info msg="StartContainer for \"e9101238c97957d4d8233b9f1e3ccc1e6524082fbfa7dd3c29253d409219d448\" returns successfully" Dec 16 12:27:37.776843 containerd[2004]: time="2025-12-16T12:27:37.776434579Z" level=info msg="StartContainer for \"29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d\" returns successfully" Dec 16 12:27:37.970844 kubelet[2940]: E1216 12:27:37.970764 2940 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.3:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:27:38.025275 kubelet[2940]: E1216 12:27:38.025188 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:38.033382 kubelet[2940]: E1216 12:27:38.031837 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:38.037997 kubelet[2940]: E1216 12:27:38.036750 2940 kubelet.go:3215] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:39.040195 kubelet[2940]: E1216 12:27:39.039631 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:39.040195 kubelet[2940]: E1216 12:27:39.039926 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:39.194405 kubelet[2940]: I1216 12:27:39.194026 2940 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-3" Dec 16 12:27:40.042384 kubelet[2940]: E1216 12:27:40.042340 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:40.895005 kubelet[2940]: E1216 12:27:40.893767 2940 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-3\" not found" node="ip-172-31-24-3" Dec 16 12:27:41.830755 kubelet[2940]: I1216 12:27:41.830683 2940 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-3" Dec 16 12:27:41.830755 kubelet[2940]: E1216 12:27:41.830743 2940 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-3\": node \"ip-172-31-24-3\" not found" Dec 16 12:27:41.857564 kubelet[2940]: I1216 12:27:41.857523 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-3" Dec 16 12:27:41.923326 kubelet[2940]: E1216 12:27:41.923181 2940 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-3.1881b1d4aee473da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-3,UID:ip-172-31-24-3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-3,},FirstTimestamp:2025-12-16 12:27:35.931335642 +0000 UTC m=+1.744947574,LastTimestamp:2025-12-16 12:27:35.931335642 +0000 UTC m=+1.744947574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-3,}" Dec 16 12:27:41.926353 kubelet[2940]: I1216 12:27:41.926039 2940 apiserver.go:52] "Watching apiserver" Dec 16 12:27:41.951866 kubelet[2940]: I1216 12:27:41.951823 2940 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:27:41.959712 kubelet[2940]: E1216 12:27:41.959650 2940 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-3" Dec 16 12:27:41.959712 kubelet[2940]: I1216 12:27:41.959701 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:41.969337 kubelet[2940]: E1216 12:27:41.969282 2940 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 16 12:27:41.982391 kubelet[2940]: E1216 12:27:41.982088 2940 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:41.982391 kubelet[2940]: I1216 12:27:41.982134 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:41.992359 kubelet[2940]: E1216 12:27:41.992310 2940 kubelet.go:3221] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-3\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:42.003409 kubelet[2940]: E1216 12:27:42.003254 2940 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-3.1881b1d4b0879082 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-3,UID:ip-172-31-24-3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-24-3,},FirstTimestamp:2025-12-16 12:27:35.958802562 +0000 UTC m=+1.772414506,LastTimestamp:2025-12-16 12:27:35.958802562 +0000 UTC m=+1.772414506,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-3,}" Dec 16 12:27:47.121251 systemd[1]: Reload requested from client PID 3230 ('systemctl') (unit session-7.scope)... Dec 16 12:27:47.121283 systemd[1]: Reloading... Dec 16 12:27:47.365033 zram_generator::config[3274]: No configuration found. Dec 16 12:27:47.770076 kubelet[2940]: I1216 12:27:47.769955 2940 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:47.932307 systemd[1]: Reloading finished in 810 ms. Dec 16 12:27:47.989172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:27:48.007591 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:27:48.008165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:48.008264 systemd[1]: kubelet.service: Consumed 2.693s CPU time, 123.6M memory peak. Dec 16 12:27:48.013517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:27:48.406916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:27:48.426612 (kubelet)[3334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:27:48.536615 kubelet[3334]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:27:48.538064 kubelet[3334]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:27:48.538064 kubelet[3334]: I1216 12:27:48.537322 3334 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:27:48.551327 kubelet[3334]: I1216 12:27:48.551283 3334 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:27:48.551654 kubelet[3334]: I1216 12:27:48.551631 3334 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:27:48.551905 kubelet[3334]: I1216 12:27:48.551864 3334 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:27:48.552268 kubelet[3334]: I1216 12:27:48.552243 3334 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:27:48.553057 kubelet[3334]: I1216 12:27:48.553030 3334 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:27:48.558723 kubelet[3334]: I1216 12:27:48.558685 3334 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:27:48.567690 kubelet[3334]: I1216 12:27:48.567451 3334 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:27:48.584405 kubelet[3334]: I1216 12:27:48.584356 3334 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:27:48.598266 kubelet[3334]: I1216 12:27:48.598147 3334 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Dec 16 12:27:48.598814 kubelet[3334]: I1216 12:27:48.598756 3334 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:27:48.599226 kubelet[3334]: I1216 12:27:48.598811 3334 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-24-3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:27:48.599901 kubelet[3334]: I1216 12:27:48.599230 3334 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 12:27:48.599901 kubelet[3334]: I1216 12:27:48.599280 3334 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:27:48.599901 kubelet[3334]: I1216 12:27:48.599326 3334 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:27:48.601671 kubelet[3334]: I1216 12:27:48.601637 3334 state_mem.go:36] 
"Initialized new in-memory state store" Dec 16 12:27:48.601999 kubelet[3334]: I1216 12:27:48.601950 3334 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:27:48.602079 kubelet[3334]: I1216 12:27:48.602004 3334 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:27:48.602079 kubelet[3334]: I1216 12:27:48.602051 3334 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:27:48.602079 kubelet[3334]: I1216 12:27:48.602079 3334 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:27:48.608282 kubelet[3334]: I1216 12:27:48.606429 3334 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:27:48.608282 kubelet[3334]: I1216 12:27:48.607562 3334 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:27:48.608282 kubelet[3334]: I1216 12:27:48.607616 3334 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:27:48.621589 kubelet[3334]: I1216 12:27:48.621542 3334 server.go:1262] "Started kubelet" Dec 16 12:27:48.625662 kubelet[3334]: I1216 12:27:48.625611 3334 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:27:48.627002 kubelet[3334]: I1216 12:27:48.626872 3334 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:27:48.635147 kubelet[3334]: I1216 12:27:48.635052 3334 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:27:48.635289 kubelet[3334]: I1216 12:27:48.635159 3334 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:27:48.635563 kubelet[3334]: I1216 12:27:48.635522 3334 server.go:249] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:27:48.642420 kubelet[3334]: I1216 12:27:48.642368 3334 server.go:310] "Adding debug handlers to kubelet server" Dec 16 12:27:48.647729 kubelet[3334]: I1216 12:27:48.647681 3334 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:27:48.651767 kubelet[3334]: I1216 12:27:48.651709 3334 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:27:48.651946 kubelet[3334]: I1216 12:27:48.651894 3334 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:27:48.654737 kubelet[3334]: I1216 12:27:48.654685 3334 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:27:48.654880 kubelet[3334]: E1216 12:27:48.654857 3334 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-3\" not found" Dec 16 12:27:48.656573 kubelet[3334]: I1216 12:27:48.656524 3334 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:27:48.657457 kubelet[3334]: I1216 12:27:48.656766 3334 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:27:48.679070 kubelet[3334]: I1216 12:27:48.678917 3334 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:27:48.769078 kubelet[3334]: E1216 12:27:48.768321 3334 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-24-3\" not found" Dec 16 12:27:48.828927 kubelet[3334]: I1216 12:27:48.828866 3334 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:27:48.836094 kubelet[3334]: I1216 12:27:48.835765 3334 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:27:48.836094 kubelet[3334]: I1216 12:27:48.836055 3334 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:27:48.836094 kubelet[3334]: I1216 12:27:48.836094 3334 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:27:48.836311 kubelet[3334]: E1216 12:27:48.836171 3334 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:27:48.921308 kubelet[3334]: I1216 12:27:48.921258 3334 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:27:48.922179 kubelet[3334]: I1216 12:27:48.922121 3334 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:27:48.922567 kubelet[3334]: I1216 12:27:48.922319 3334 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:27:48.923789 kubelet[3334]: I1216 12:27:48.923126 3334 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:27:48.923789 kubelet[3334]: I1216 12:27:48.923157 3334 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:27:48.923789 kubelet[3334]: I1216 12:27:48.923192 3334 policy_none.go:49] "None policy: Start" Dec 16 12:27:48.923789 kubelet[3334]: I1216 12:27:48.923212 3334 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:27:48.923789 kubelet[3334]: I1216 12:27:48.923234 3334 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 16 12:27:48.927394 kubelet[3334]: I1216 12:27:48.927341 3334 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:27:48.927394 kubelet[3334]: I1216 12:27:48.927393 3334 policy_none.go:47] "Start" Dec 16 12:27:48.936792 kubelet[3334]: E1216 12:27:48.936354 3334 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 12:27:48.949957 kubelet[3334]: E1216 12:27:48.949129 3334 
manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:27:48.949957 kubelet[3334]: I1216 12:27:48.949411 3334 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:27:48.949957 kubelet[3334]: I1216 12:27:48.949430 3334 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:27:48.951878 kubelet[3334]: I1216 12:27:48.951843 3334 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:27:48.955259 kubelet[3334]: E1216 12:27:48.955212 3334 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:27:49.083690 kubelet[3334]: I1216 12:27:49.083657 3334 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-3" Dec 16 12:27:49.109407 kubelet[3334]: I1216 12:27:49.109286 3334 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-3" Dec 16 12:27:49.109688 kubelet[3334]: I1216 12:27:49.109597 3334 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-3" Dec 16 12:27:49.140894 kubelet[3334]: I1216 12:27:49.138520 3334 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:49.141329 kubelet[3334]: I1216 12:27:49.140454 3334 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-3" Dec 16 12:27:49.142174 kubelet[3334]: I1216 12:27:49.140819 3334 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:49.170995 kubelet[3334]: I1216 12:27:49.170630 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:49.171878 kubelet[3334]: I1216 12:27:49.171773 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:49.172720 kubelet[3334]: I1216 12:27:49.172610 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:49.173715 kubelet[3334]: I1216 12:27:49.173302 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ab0cc926456cc71fcabfe4287ba30cb-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-3\" (UID: \"5ab0cc926456cc71fcabfe4287ba30cb\") " pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:49.174997 kubelet[3334]: I1216 12:27:49.173931 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61b6642bd1611096b6d3cba6caa192b3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-3\" (UID: \"61b6642bd1611096b6d3cba6caa192b3\") " pod="kube-system/kube-apiserver-ip-172-31-24-3" Dec 16 12:27:49.174997 kubelet[3334]: I1216 12:27:49.174535 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:49.174997 kubelet[3334]: I1216 12:27:49.174584 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48f6324cdf7b8f082d0f26f4f5117b56-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-3\" (UID: \"48f6324cdf7b8f082d0f26f4f5117b56\") " pod="kube-system/kube-controller-manager-ip-172-31-24-3" Dec 16 12:27:49.174997 kubelet[3334]: I1216 12:27:49.174631 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61b6642bd1611096b6d3cba6caa192b3-ca-certs\") pod \"kube-apiserver-ip-172-31-24-3\" (UID: \"61b6642bd1611096b6d3cba6caa192b3\") " pod="kube-system/kube-apiserver-ip-172-31-24-3" Dec 16 12:27:49.174997 kubelet[3334]: I1216 12:27:49.174666 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61b6642bd1611096b6d3cba6caa192b3-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-3\" (UID: \"61b6642bd1611096b6d3cba6caa192b3\") " pod="kube-system/kube-apiserver-ip-172-31-24-3" Dec 16 12:27:49.176624 kubelet[3334]: E1216 12:27:49.176534 3334 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-3\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:49.569527 update_engine[1974]: I20251216 12:27:49.569422 1974 update_attempter.cc:509] Updating boot flags... 
Dec 16 12:27:49.607372 kubelet[3334]: I1216 12:27:49.607166 3334 apiserver.go:52] "Watching apiserver" Dec 16 12:27:49.657143 kubelet[3334]: I1216 12:27:49.657060 3334 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:27:49.890757 kubelet[3334]: I1216 12:27:49.890629 3334 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:49.912615 kubelet[3334]: E1216 12:27:49.910679 3334 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-3\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-3" Dec 16 12:27:50.127158 kubelet[3334]: I1216 12:27:50.125871 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-3" podStartSLOduration=1.125849273 podStartE2EDuration="1.125849273s" podCreationTimestamp="2025-12-16 12:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:50.016397548 +0000 UTC m=+1.579523145" watchObservedRunningTime="2025-12-16 12:27:50.125849273 +0000 UTC m=+1.688974834" Dec 16 12:27:50.165080 kubelet[3334]: I1216 12:27:50.164929 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-3" podStartSLOduration=1.164881673 podStartE2EDuration="1.164881673s" podCreationTimestamp="2025-12-16 12:27:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:50.127525745 +0000 UTC m=+1.690651342" watchObservedRunningTime="2025-12-16 12:27:50.164881673 +0000 UTC m=+1.728007258" Dec 16 12:27:51.939175 kubelet[3334]: I1216 12:27:51.939114 3334 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:27:51.941022 containerd[2004]: 
time="2025-12-16T12:27:51.940867042Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 12:27:51.942473 kubelet[3334]: I1216 12:27:51.942436 3334 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:27:53.021375 systemd[1]: Created slice kubepods-besteffort-pod589c9af5_3a54_4a2b_8dc9_3719385834e6.slice - libcontainer container kubepods-besteffort-pod589c9af5_3a54_4a2b_8dc9_3719385834e6.slice. Dec 16 12:27:53.129212 kubelet[3334]: I1216 12:27:53.129137 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/589c9af5-3a54-4a2b-8dc9-3719385834e6-kube-proxy\") pod \"kube-proxy-ztcrc\" (UID: \"589c9af5-3a54-4a2b-8dc9-3719385834e6\") " pod="kube-system/kube-proxy-ztcrc" Dec 16 12:27:53.129212 kubelet[3334]: I1216 12:27:53.129208 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/589c9af5-3a54-4a2b-8dc9-3719385834e6-xtables-lock\") pod \"kube-proxy-ztcrc\" (UID: \"589c9af5-3a54-4a2b-8dc9-3719385834e6\") " pod="kube-system/kube-proxy-ztcrc" Dec 16 12:27:53.129910 kubelet[3334]: I1216 12:27:53.129250 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/589c9af5-3a54-4a2b-8dc9-3719385834e6-lib-modules\") pod \"kube-proxy-ztcrc\" (UID: \"589c9af5-3a54-4a2b-8dc9-3719385834e6\") " pod="kube-system/kube-proxy-ztcrc" Dec 16 12:27:53.129910 kubelet[3334]: I1216 12:27:53.129294 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4p74\" (UniqueName: \"kubernetes.io/projected/589c9af5-3a54-4a2b-8dc9-3719385834e6-kube-api-access-j4p74\") pod \"kube-proxy-ztcrc\" (UID: \"589c9af5-3a54-4a2b-8dc9-3719385834e6\") " 
pod="kube-system/kube-proxy-ztcrc" Dec 16 12:27:53.175409 systemd[1]: Created slice kubepods-besteffort-podfb52aafc_864d_43eb_80fc_7cb77f58c6ec.slice - libcontainer container kubepods-besteffort-podfb52aafc_864d_43eb_80fc_7cb77f58c6ec.slice. Dec 16 12:27:53.230778 kubelet[3334]: I1216 12:27:53.230629 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m2mz\" (UniqueName: \"kubernetes.io/projected/fb52aafc-864d-43eb-80fc-7cb77f58c6ec-kube-api-access-5m2mz\") pod \"tigera-operator-65cdcdfd6d-hnnl5\" (UID: \"fb52aafc-864d-43eb-80fc-7cb77f58c6ec\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hnnl5" Dec 16 12:27:53.230778 kubelet[3334]: I1216 12:27:53.230725 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb52aafc-864d-43eb-80fc-7cb77f58c6ec-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hnnl5\" (UID: \"fb52aafc-864d-43eb-80fc-7cb77f58c6ec\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hnnl5" Dec 16 12:27:53.342307 containerd[2004]: time="2025-12-16T12:27:53.341566244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztcrc,Uid:589c9af5-3a54-4a2b-8dc9-3719385834e6,Namespace:kube-system,Attempt:0,}" Dec 16 12:27:53.393467 containerd[2004]: time="2025-12-16T12:27:53.393377709Z" level=info msg="connecting to shim 1b952682c6e4c6e2f646cbb835faaa2cc465e99756069628707473b57e18811c" address="unix:///run/containerd/s/ca9a2986313e9956a4f114e14818ec1f5db27afaf6afb484c4cb6b853f9a83c8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:53.455362 systemd[1]: Started cri-containerd-1b952682c6e4c6e2f646cbb835faaa2cc465e99756069628707473b57e18811c.scope - libcontainer container 1b952682c6e4c6e2f646cbb835faaa2cc465e99756069628707473b57e18811c. 
Dec 16 12:27:53.490117 containerd[2004]: time="2025-12-16T12:27:53.490035585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hnnl5,Uid:fb52aafc-864d-43eb-80fc-7cb77f58c6ec,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:27:53.518032 containerd[2004]: time="2025-12-16T12:27:53.517920465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ztcrc,Uid:589c9af5-3a54-4a2b-8dc9-3719385834e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b952682c6e4c6e2f646cbb835faaa2cc465e99756069628707473b57e18811c\"" Dec 16 12:27:53.531692 containerd[2004]: time="2025-12-16T12:27:53.531599901Z" level=info msg="CreateContainer within sandbox \"1b952682c6e4c6e2f646cbb835faaa2cc465e99756069628707473b57e18811c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:27:53.566612 containerd[2004]: time="2025-12-16T12:27:53.566472094Z" level=info msg="connecting to shim 6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792" address="unix:///run/containerd/s/57fd99bc8ae684a7d7d6d7a7d86b416e555bb182349f4f70fb12fa35e774f4cf" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:27:53.576276 containerd[2004]: time="2025-12-16T12:27:53.576217786Z" level=info msg="Container 0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:53.596430 containerd[2004]: time="2025-12-16T12:27:53.595717006Z" level=info msg="CreateContainer within sandbox \"1b952682c6e4c6e2f646cbb835faaa2cc465e99756069628707473b57e18811c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968\"" Dec 16 12:27:53.601570 containerd[2004]: time="2025-12-16T12:27:53.601462810Z" level=info msg="StartContainer for \"0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968\"" Dec 16 12:27:53.609399 containerd[2004]: time="2025-12-16T12:27:53.609322234Z" level=info msg="connecting to shim 
0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968" address="unix:///run/containerd/s/ca9a2986313e9956a4f114e14818ec1f5db27afaf6afb484c4cb6b853f9a83c8" protocol=ttrpc version=3 Dec 16 12:27:53.619301 systemd[1]: Started cri-containerd-6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792.scope - libcontainer container 6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792. Dec 16 12:27:53.664420 systemd[1]: Started cri-containerd-0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968.scope - libcontainer container 0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968. Dec 16 12:27:53.787780 containerd[2004]: time="2025-12-16T12:27:53.787697543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hnnl5,Uid:fb52aafc-864d-43eb-80fc-7cb77f58c6ec,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792\"" Dec 16 12:27:53.796103 containerd[2004]: time="2025-12-16T12:27:53.795751751Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:27:53.827371 containerd[2004]: time="2025-12-16T12:27:53.827204111Z" level=info msg="StartContainer for \"0bf9c721d2c99f7d0d11cb679b6c8b32a7b25a5b9c505f5507602638c38e9968\" returns successfully" Dec 16 12:27:55.307132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589264679.mount: Deactivated successfully. 
Dec 16 12:27:56.291426 kubelet[3334]: I1216 12:27:56.291252 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztcrc" podStartSLOduration=4.291227267 podStartE2EDuration="4.291227267s" podCreationTimestamp="2025-12-16 12:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:27:53.972758532 +0000 UTC m=+5.535884105" watchObservedRunningTime="2025-12-16 12:27:56.291227267 +0000 UTC m=+7.854352828" Dec 16 12:27:56.471033 containerd[2004]: time="2025-12-16T12:27:56.470275920Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:56.471864 containerd[2004]: time="2025-12-16T12:27:56.471779952Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:27:56.473587 containerd[2004]: time="2025-12-16T12:27:56.473505096Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:56.477705 containerd[2004]: time="2025-12-16T12:27:56.477622248Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:27:56.479474 containerd[2004]: time="2025-12-16T12:27:56.479399184Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.683351549s" Dec 16 12:27:56.479474 containerd[2004]: time="2025-12-16T12:27:56.479466120Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:27:56.485675 containerd[2004]: time="2025-12-16T12:27:56.485588052Z" level=info msg="CreateContainer within sandbox \"6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:27:56.500281 containerd[2004]: time="2025-12-16T12:27:56.500205264Z" level=info msg="Container 9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:27:56.510501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2613979814.mount: Deactivated successfully. Dec 16 12:27:56.517939 containerd[2004]: time="2025-12-16T12:27:56.517847664Z" level=info msg="CreateContainer within sandbox \"6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8\"" Dec 16 12:27:56.520168 containerd[2004]: time="2025-12-16T12:27:56.520059552Z" level=info msg="StartContainer for \"9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8\"" Dec 16 12:27:56.524280 containerd[2004]: time="2025-12-16T12:27:56.523071696Z" level=info msg="connecting to shim 9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8" address="unix:///run/containerd/s/57fd99bc8ae684a7d7d6d7a7d86b416e555bb182349f4f70fb12fa35e774f4cf" protocol=ttrpc version=3 Dec 16 12:27:56.570324 systemd[1]: Started cri-containerd-9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8.scope - libcontainer container 9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8. 
Dec 16 12:27:56.634627 containerd[2004]: time="2025-12-16T12:27:56.634511785Z" level=info msg="StartContainer for \"9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8\" returns successfully" Dec 16 12:27:59.635616 kubelet[3334]: I1216 12:27:59.635400 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hnnl5" podStartSLOduration=3.947886771 podStartE2EDuration="6.635376868s" podCreationTimestamp="2025-12-16 12:27:53 +0000 UTC" firstStartedPulling="2025-12-16 12:27:53.793806659 +0000 UTC m=+5.356932220" lastFinishedPulling="2025-12-16 12:27:56.481296756 +0000 UTC m=+8.044422317" observedRunningTime="2025-12-16 12:27:57.004716239 +0000 UTC m=+8.567841848" watchObservedRunningTime="2025-12-16 12:27:59.635376868 +0000 UTC m=+11.198502537" Dec 16 12:28:03.867706 sudo[2361]: pam_unix(sudo:session): session closed for user root Dec 16 12:28:03.892835 sshd[2360]: Connection closed by 139.178.89.65 port 60616 Dec 16 12:28:03.897326 sshd-session[2357]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:03.909496 systemd[1]: sshd@6-172.31.24.3:22-139.178.89.65:60616.service: Deactivated successfully. Dec 16 12:28:03.918151 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:28:03.918651 systemd[1]: session-7.scope: Consumed 10.083s CPU time, 221.6M memory peak. Dec 16 12:28:03.925514 systemd-logind[1973]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:28:03.933683 systemd-logind[1973]: Removed session 7. 
Dec 16 12:28:24.752479 kubelet[3334]: E1216 12:28:24.750657 3334 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-24-3\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-24-3' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret" Dec 16 12:28:24.754335 systemd[1]: Created slice kubepods-besteffort-poda54cdd9b_a5e8_4eb2_970f_a5034d59937b.slice - libcontainer container kubepods-besteffort-poda54cdd9b_a5e8_4eb2_970f_a5034d59937b.slice. Dec 16 12:28:24.864612 kubelet[3334]: I1216 12:28:24.864352 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a54cdd9b-a5e8-4eb2-970f-a5034d59937b-typha-certs\") pod \"calico-typha-5d4dccdd6-bjwhl\" (UID: \"a54cdd9b-a5e8-4eb2-970f-a5034d59937b\") " pod="calico-system/calico-typha-5d4dccdd6-bjwhl" Dec 16 12:28:24.864612 kubelet[3334]: I1216 12:28:24.864433 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbhfk\" (UniqueName: \"kubernetes.io/projected/a54cdd9b-a5e8-4eb2-970f-a5034d59937b-kube-api-access-nbhfk\") pod \"calico-typha-5d4dccdd6-bjwhl\" (UID: \"a54cdd9b-a5e8-4eb2-970f-a5034d59937b\") " pod="calico-system/calico-typha-5d4dccdd6-bjwhl" Dec 16 12:28:24.864612 kubelet[3334]: I1216 12:28:24.864482 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a54cdd9b-a5e8-4eb2-970f-a5034d59937b-tigera-ca-bundle\") pod \"calico-typha-5d4dccdd6-bjwhl\" (UID: \"a54cdd9b-a5e8-4eb2-970f-a5034d59937b\") " pod="calico-system/calico-typha-5d4dccdd6-bjwhl" Dec 16 12:28:24.946032 systemd[1]: Created slice 
kubepods-besteffort-podef6336a6_c43a_49aa_82bb_1770205601a9.slice - libcontainer container kubepods-besteffort-podef6336a6_c43a_49aa_82bb_1770205601a9.slice. Dec 16 12:28:25.065916 kubelet[3334]: I1216 12:28:25.065743 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-flexvol-driver-host\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.065916 kubelet[3334]: I1216 12:28:25.065827 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqpzm\" (UniqueName: \"kubernetes.io/projected/ef6336a6-c43a-49aa-82bb-1770205601a9-kube-api-access-qqpzm\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.065916 kubelet[3334]: I1216 12:28:25.065869 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-cni-log-dir\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.065916 kubelet[3334]: I1216 12:28:25.065904 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ef6336a6-c43a-49aa-82bb-1770205601a9-node-certs\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066225 kubelet[3334]: I1216 12:28:25.065942 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-var-lib-calico\") pod 
\"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066225 kubelet[3334]: I1216 12:28:25.066007 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-cni-bin-dir\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066225 kubelet[3334]: I1216 12:28:25.066044 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-cni-net-dir\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066225 kubelet[3334]: I1216 12:28:25.066084 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-var-run-calico\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066225 kubelet[3334]: I1216 12:28:25.066124 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-lib-modules\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066476 kubelet[3334]: I1216 12:28:25.066165 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-xtables-lock\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " 
pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066476 kubelet[3334]: I1216 12:28:25.066207 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef6336a6-c43a-49aa-82bb-1770205601a9-tigera-ca-bundle\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.066476 kubelet[3334]: I1216 12:28:25.066243 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ef6336a6-c43a-49aa-82bb-1770205601a9-policysync\") pod \"calico-node-pmx42\" (UID: \"ef6336a6-c43a-49aa-82bb-1770205601a9\") " pod="calico-system/calico-node-pmx42" Dec 16 12:28:25.072832 kubelet[3334]: E1216 12:28:25.072624 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:25.167587 kubelet[3334]: I1216 12:28:25.167474 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0821a17d-3c03-4228-a061-1c97b86f544e-socket-dir\") pod \"csi-node-driver-qps4l\" (UID: \"0821a17d-3c03-4228-a061-1c97b86f544e\") " pod="calico-system/csi-node-driver-qps4l" Dec 16 12:28:25.167587 kubelet[3334]: I1216 12:28:25.167551 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4f5v\" (UniqueName: \"kubernetes.io/projected/0821a17d-3c03-4228-a061-1c97b86f544e-kube-api-access-c4f5v\") pod \"csi-node-driver-qps4l\" (UID: \"0821a17d-3c03-4228-a061-1c97b86f544e\") " pod="calico-system/csi-node-driver-qps4l" Dec 16 
12:28:25.168859 kubelet[3334]: I1216 12:28:25.167635 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0821a17d-3c03-4228-a061-1c97b86f544e-registration-dir\") pod \"csi-node-driver-qps4l\" (UID: \"0821a17d-3c03-4228-a061-1c97b86f544e\") " pod="calico-system/csi-node-driver-qps4l" Dec 16 12:28:25.168859 kubelet[3334]: I1216 12:28:25.167692 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0821a17d-3c03-4228-a061-1c97b86f544e-varrun\") pod \"csi-node-driver-qps4l\" (UID: \"0821a17d-3c03-4228-a061-1c97b86f544e\") " pod="calico-system/csi-node-driver-qps4l" Dec 16 12:28:25.168859 kubelet[3334]: I1216 12:28:25.167750 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0821a17d-3c03-4228-a061-1c97b86f544e-kubelet-dir\") pod \"csi-node-driver-qps4l\" (UID: \"0821a17d-3c03-4228-a061-1c97b86f544e\") " pod="calico-system/csi-node-driver-qps4l" Dec 16 12:28:25.181036 kubelet[3334]: E1216 12:28:25.180262 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.181036 kubelet[3334]: W1216 12:28:25.180305 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.181036 kubelet[3334]: E1216 12:28:25.180342 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.182182 kubelet[3334]: E1216 12:28:25.182137 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.182182 kubelet[3334]: W1216 12:28:25.182174 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.182325 kubelet[3334]: E1216 12:28:25.182208 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.214465 kubelet[3334]: E1216 12:28:25.214335 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.214465 kubelet[3334]: W1216 12:28:25.214371 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.214465 kubelet[3334]: E1216 12:28:25.214404 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.261718 containerd[2004]: time="2025-12-16T12:28:25.261637323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmx42,Uid:ef6336a6-c43a-49aa-82bb-1770205601a9,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:25.268667 kubelet[3334]: E1216 12:28:25.268612 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.269006 kubelet[3334]: W1216 12:28:25.268844 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.269006 kubelet[3334]: E1216 12:28:25.268885 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.270759 kubelet[3334]: E1216 12:28:25.269506 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.270759 kubelet[3334]: W1216 12:28:25.269527 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.270759 kubelet[3334]: E1216 12:28:25.269550 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.270759 kubelet[3334]: E1216 12:28:25.270046 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.270759 kubelet[3334]: W1216 12:28:25.270065 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.270759 kubelet[3334]: E1216 12:28:25.270086 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.270759 kubelet[3334]: E1216 12:28:25.270516 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.270759 kubelet[3334]: W1216 12:28:25.270533 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.270759 kubelet[3334]: E1216 12:28:25.270554 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.271811 kubelet[3334]: E1216 12:28:25.271635 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.271811 kubelet[3334]: W1216 12:28:25.271657 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.271811 kubelet[3334]: E1216 12:28:25.271685 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.272405 kubelet[3334]: E1216 12:28:25.272382 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.272647 kubelet[3334]: W1216 12:28:25.272578 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.272647 kubelet[3334]: E1216 12:28:25.272610 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.273350 kubelet[3334]: E1216 12:28:25.273269 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.273350 kubelet[3334]: W1216 12:28:25.273296 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.273350 kubelet[3334]: E1216 12:28:25.273322 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.274055 kubelet[3334]: E1216 12:28:25.273932 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.274055 kubelet[3334]: W1216 12:28:25.273954 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.274055 kubelet[3334]: E1216 12:28:25.274026 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.274832 kubelet[3334]: E1216 12:28:25.274643 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.274832 kubelet[3334]: W1216 12:28:25.274666 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.274832 kubelet[3334]: E1216 12:28:25.274686 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.275199 kubelet[3334]: E1216 12:28:25.275177 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.275315 kubelet[3334]: W1216 12:28:25.275292 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.275418 kubelet[3334]: E1216 12:28:25.275396 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.276027 kubelet[3334]: E1216 12:28:25.275857 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.276027 kubelet[3334]: W1216 12:28:25.275882 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.276027 kubelet[3334]: E1216 12:28:25.275905 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.276318 kubelet[3334]: E1216 12:28:25.276288 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.276391 kubelet[3334]: W1216 12:28:25.276316 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.276391 kubelet[3334]: E1216 12:28:25.276340 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.276876 kubelet[3334]: E1216 12:28:25.276733 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.276876 kubelet[3334]: W1216 12:28:25.276760 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.276876 kubelet[3334]: E1216 12:28:25.276784 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.278326 kubelet[3334]: E1216 12:28:25.277178 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.278326 kubelet[3334]: W1216 12:28:25.277195 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.278326 kubelet[3334]: E1216 12:28:25.277218 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.278326 kubelet[3334]: E1216 12:28:25.277520 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.278326 kubelet[3334]: W1216 12:28:25.277535 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.278326 kubelet[3334]: E1216 12:28:25.277554 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.278326 kubelet[3334]: E1216 12:28:25.277842 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.278326 kubelet[3334]: W1216 12:28:25.277856 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.278326 kubelet[3334]: E1216 12:28:25.277874 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.279049 kubelet[3334]: E1216 12:28:25.279006 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.279049 kubelet[3334]: W1216 12:28:25.279037 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.279176 kubelet[3334]: E1216 12:28:25.279064 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.279452 kubelet[3334]: E1216 12:28:25.279425 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.279512 kubelet[3334]: W1216 12:28:25.279450 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.279512 kubelet[3334]: E1216 12:28:25.279474 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.279834 kubelet[3334]: E1216 12:28:25.279807 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.279900 kubelet[3334]: W1216 12:28:25.279832 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.279900 kubelet[3334]: E1216 12:28:25.279856 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.280921 kubelet[3334]: E1216 12:28:25.280882 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.280921 kubelet[3334]: W1216 12:28:25.280920 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.281113 kubelet[3334]: E1216 12:28:25.280951 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.281472 kubelet[3334]: E1216 12:28:25.281444 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.281578 kubelet[3334]: W1216 12:28:25.281470 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.281578 kubelet[3334]: E1216 12:28:25.281511 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.281902 kubelet[3334]: E1216 12:28:25.281872 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.282030 kubelet[3334]: W1216 12:28:25.281900 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.282030 kubelet[3334]: E1216 12:28:25.281924 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.282717 kubelet[3334]: E1216 12:28:25.282612 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.282717 kubelet[3334]: W1216 12:28:25.282671 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.282717 kubelet[3334]: E1216 12:28:25.282702 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.284026 kubelet[3334]: E1216 12:28:25.283455 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.284026 kubelet[3334]: W1216 12:28:25.283492 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.284026 kubelet[3334]: E1216 12:28:25.283546 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:25.284286 kubelet[3334]: E1216 12:28:25.284164 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.284286 kubelet[3334]: W1216 12:28:25.284187 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.284286 kubelet[3334]: E1216 12:28:25.284245 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.307270 kubelet[3334]: E1216 12:28:25.307091 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.307270 kubelet[3334]: W1216 12:28:25.307150 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.307270 kubelet[3334]: E1216 12:28:25.307201 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:25.322808 containerd[2004]: time="2025-12-16T12:28:25.320916951Z" level=info msg="connecting to shim 700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b" address="unix:///run/containerd/s/e552941ee230f840568921d42033af4d6a803e42f527b97a39a417cf91dd7254" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:25.375577 systemd[1]: Started cri-containerd-700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b.scope - libcontainer container 700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b. 
Dec 16 12:28:25.425659 containerd[2004]: time="2025-12-16T12:28:25.425596492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmx42,Uid:ef6336a6-c43a-49aa-82bb-1770205601a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\"" Dec 16 12:28:25.429406 containerd[2004]: time="2025-12-16T12:28:25.429300856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:28:25.965125 kubelet[3334]: E1216 12:28:25.965067 3334 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Dec 16 12:28:25.965947 kubelet[3334]: E1216 12:28:25.965245 3334 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a54cdd9b-a5e8-4eb2-970f-a5034d59937b-typha-certs podName:a54cdd9b-a5e8-4eb2-970f-a5034d59937b nodeName:}" failed. No retries permitted until 2025-12-16 12:28:26.465206963 +0000 UTC m=+38.028332572 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/a54cdd9b-a5e8-4eb2-970f-a5034d59937b-typha-certs") pod "calico-typha-5d4dccdd6-bjwhl" (UID: "a54cdd9b-a5e8-4eb2-970f-a5034d59937b") : failed to sync secret cache: timed out waiting for the condition Dec 16 12:28:25.979894 kubelet[3334]: E1216 12:28:25.979832 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:25.980178 kubelet[3334]: W1216 12:28:25.979865 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:25.980178 kubelet[3334]: E1216 12:28:25.980048 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:26.081904 kubelet[3334]: E1216 12:28:26.081845 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.081904 kubelet[3334]: W1216 12:28:26.081884 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.082161 kubelet[3334]: E1216 12:28:26.081916 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:26.183750 kubelet[3334]: E1216 12:28:26.183697 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.183903 kubelet[3334]: W1216 12:28:26.183755 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.183903 kubelet[3334]: E1216 12:28:26.183788 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:26.285156 kubelet[3334]: E1216 12:28:26.284526 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.285156 kubelet[3334]: W1216 12:28:26.284675 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.285156 kubelet[3334]: E1216 12:28:26.284714 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:26.387007 kubelet[3334]: E1216 12:28:26.386889 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.387007 kubelet[3334]: W1216 12:28:26.386924 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.387409 kubelet[3334]: E1216 12:28:26.387241 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:26.490574 kubelet[3334]: E1216 12:28:26.490483 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.490760 kubelet[3334]: W1216 12:28:26.490609 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.490760 kubelet[3334]: E1216 12:28:26.490648 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:26.491936 kubelet[3334]: E1216 12:28:26.491891 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.494400 kubelet[3334]: W1216 12:28:26.494015 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.494400 kubelet[3334]: E1216 12:28:26.494068 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:26.494845 kubelet[3334]: E1216 12:28:26.494819 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.496499 kubelet[3334]: W1216 12:28:26.494952 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.496499 kubelet[3334]: E1216 12:28:26.496097 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:26.497157 kubelet[3334]: E1216 12:28:26.496833 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.497157 kubelet[3334]: W1216 12:28:26.496858 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.497157 kubelet[3334]: E1216 12:28:26.496910 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:28:26.497646 kubelet[3334]: E1216 12:28:26.497622 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.497773 kubelet[3334]: W1216 12:28:26.497750 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.497910 kubelet[3334]: E1216 12:28:26.497889 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:26.514002 kubelet[3334]: E1216 12:28:26.513381 3334 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:28:26.514773 kubelet[3334]: W1216 12:28:26.514305 3334 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:28:26.514773 kubelet[3334]: E1216 12:28:26.514354 3334 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:28:26.572030 containerd[2004]: time="2025-12-16T12:28:26.571802310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d4dccdd6-bjwhl,Uid:a54cdd9b-a5e8-4eb2-970f-a5034d59937b,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:26.586639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3415760910.mount: Deactivated successfully. 
Dec 16 12:28:26.634013 containerd[2004]: time="2025-12-16T12:28:26.633027366Z" level=info msg="connecting to shim 529bb0efd67b96432c545928d5f72bd91a6cd7eb1d9a729919de006074af1086" address="unix:///run/containerd/s/72c98700d2d0146ab02d8a8f70ff6375709630f9477133f17a690d67fb1909ab" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:26.687307 systemd[1]: Started cri-containerd-529bb0efd67b96432c545928d5f72bd91a6cd7eb1d9a729919de006074af1086.scope - libcontainer container 529bb0efd67b96432c545928d5f72bd91a6cd7eb1d9a729919de006074af1086. Dec 16 12:28:26.824482 containerd[2004]: time="2025-12-16T12:28:26.824327299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d4dccdd6-bjwhl,Uid:a54cdd9b-a5e8-4eb2-970f-a5034d59937b,Namespace:calico-system,Attempt:0,} returns sandbox id \"529bb0efd67b96432c545928d5f72bd91a6cd7eb1d9a729919de006074af1086\"" Dec 16 12:28:26.827879 containerd[2004]: time="2025-12-16T12:28:26.827658247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:26.830180 containerd[2004]: time="2025-12-16T12:28:26.830106871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Dec 16 12:28:26.832353 containerd[2004]: time="2025-12-16T12:28:26.832260367Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:26.838242 kubelet[3334]: E1216 12:28:26.837488 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:26.843904 containerd[2004]: 
time="2025-12-16T12:28:26.841930027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:26.843904 containerd[2004]: time="2025-12-16T12:28:26.842774767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.413381523s" Dec 16 12:28:26.843904 containerd[2004]: time="2025-12-16T12:28:26.842831563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:28:26.853375 containerd[2004]: time="2025-12-16T12:28:26.853322299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:28:26.863707 containerd[2004]: time="2025-12-16T12:28:26.863645023Z" level=info msg="CreateContainer within sandbox \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:28:26.883750 containerd[2004]: time="2025-12-16T12:28:26.883678207Z" level=info msg="Container 7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:26.902235 containerd[2004]: time="2025-12-16T12:28:26.902155351Z" level=info msg="CreateContainer within sandbox \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75\"" Dec 16 12:28:26.905007 
containerd[2004]: time="2025-12-16T12:28:26.903294523Z" level=info msg="StartContainer for \"7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75\"" Dec 16 12:28:26.906477 containerd[2004]: time="2025-12-16T12:28:26.906421003Z" level=info msg="connecting to shim 7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75" address="unix:///run/containerd/s/e552941ee230f840568921d42033af4d6a803e42f527b97a39a417cf91dd7254" protocol=ttrpc version=3 Dec 16 12:28:26.950311 systemd[1]: Started cri-containerd-7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75.scope - libcontainer container 7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75. Dec 16 12:28:27.111927 containerd[2004]: time="2025-12-16T12:28:27.111793300Z" level=info msg="StartContainer for \"7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75\" returns successfully" Dec 16 12:28:27.154584 systemd[1]: cri-containerd-7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75.scope: Deactivated successfully. Dec 16 12:28:27.163485 containerd[2004]: time="2025-12-16T12:28:27.163405756Z" level=info msg="received container exit event container_id:\"7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75\" id:\"7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75\" pid:4179 exited_at:{seconds:1765888107 nanos:162722152}" Dec 16 12:28:27.205237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7956a731f3a5b07ea9523e7793102df0ef84831ee318cc0c8c6633841db73c75-rootfs.mount: Deactivated successfully. 
Dec 16 12:28:28.817591 containerd[2004]: time="2025-12-16T12:28:28.817527909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:28.819407 containerd[2004]: time="2025-12-16T12:28:28.819336645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Dec 16 12:28:28.820560 containerd[2004]: time="2025-12-16T12:28:28.820184181Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:28.824994 containerd[2004]: time="2025-12-16T12:28:28.824925129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:28.826349 containerd[2004]: time="2025-12-16T12:28:28.826298625Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.970857378s" Dec 16 12:28:28.826521 containerd[2004]: time="2025-12-16T12:28:28.826490577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:28:28.830633 containerd[2004]: time="2025-12-16T12:28:28.830505741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:28:28.849615 kubelet[3334]: E1216 12:28:28.849278 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:28.867510 containerd[2004]: time="2025-12-16T12:28:28.867449781Z" level=info msg="CreateContainer within sandbox \"529bb0efd67b96432c545928d5f72bd91a6cd7eb1d9a729919de006074af1086\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:28:28.882012 containerd[2004]: time="2025-12-16T12:28:28.880353381Z" level=info msg="Container eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:28.895150 containerd[2004]: time="2025-12-16T12:28:28.894931137Z" level=info msg="CreateContainer within sandbox \"529bb0efd67b96432c545928d5f72bd91a6cd7eb1d9a729919de006074af1086\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624\"" Dec 16 12:28:28.897058 containerd[2004]: time="2025-12-16T12:28:28.896137869Z" level=info msg="StartContainer for \"eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624\"" Dec 16 12:28:28.902422 containerd[2004]: time="2025-12-16T12:28:28.902291109Z" level=info msg="connecting to shim eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624" address="unix:///run/containerd/s/72c98700d2d0146ab02d8a8f70ff6375709630f9477133f17a690d67fb1909ab" protocol=ttrpc version=3 Dec 16 12:28:28.956374 systemd[1]: Started cri-containerd-eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624.scope - libcontainer container eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624. 
Dec 16 12:28:29.059201 containerd[2004]: time="2025-12-16T12:28:29.059081106Z" level=info msg="StartContainer for \"eddcb601f77d9e92e9d20d4f6fcd890d5c3852786036452d9bd97371f694c624\" returns successfully" Dec 16 12:28:29.148111 kubelet[3334]: I1216 12:28:29.147175 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d4dccdd6-bjwhl" podStartSLOduration=3.14575234 podStartE2EDuration="5.147152562s" podCreationTimestamp="2025-12-16 12:28:24 +0000 UTC" firstStartedPulling="2025-12-16 12:28:26.827203531 +0000 UTC m=+38.390329080" lastFinishedPulling="2025-12-16 12:28:28.828603729 +0000 UTC m=+40.391729302" observedRunningTime="2025-12-16 12:28:29.141695886 +0000 UTC m=+40.704821483" watchObservedRunningTime="2025-12-16 12:28:29.147152562 +0000 UTC m=+40.710278111" Dec 16 12:28:30.836629 kubelet[3334]: E1216 12:28:30.836495 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:31.878958 containerd[2004]: time="2025-12-16T12:28:31.878250000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:31.880522 containerd[2004]: time="2025-12-16T12:28:31.880474020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:28:31.882536 containerd[2004]: time="2025-12-16T12:28:31.882489252Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:31.887448 containerd[2004]: time="2025-12-16T12:28:31.887354652Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:31.889591 containerd[2004]: time="2025-12-16T12:28:31.888709776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.058139751s" Dec 16 12:28:31.889591 containerd[2004]: time="2025-12-16T12:28:31.888770280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:28:31.901051 containerd[2004]: time="2025-12-16T12:28:31.901001604Z" level=info msg="CreateContainer within sandbox \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:28:31.924445 containerd[2004]: time="2025-12-16T12:28:31.924258408Z" level=info msg="Container fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:31.930709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652139151.mount: Deactivated successfully. 
Dec 16 12:28:31.953648 containerd[2004]: time="2025-12-16T12:28:31.953566488Z" level=info msg="CreateContainer within sandbox \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0\"" Dec 16 12:28:31.955768 containerd[2004]: time="2025-12-16T12:28:31.955164768Z" level=info msg="StartContainer for \"fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0\"" Dec 16 12:28:31.958717 containerd[2004]: time="2025-12-16T12:28:31.958655028Z" level=info msg="connecting to shim fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0" address="unix:///run/containerd/s/e552941ee230f840568921d42033af4d6a803e42f527b97a39a417cf91dd7254" protocol=ttrpc version=3 Dec 16 12:28:32.002349 systemd[1]: Started cri-containerd-fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0.scope - libcontainer container fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0. 
Dec 16 12:28:32.117424 containerd[2004]: time="2025-12-16T12:28:32.117374745Z" level=info msg="StartContainer for \"fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0\" returns successfully" Dec 16 12:28:32.838390 kubelet[3334]: E1216 12:28:32.838309 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:33.238822 containerd[2004]: time="2025-12-16T12:28:33.238319663Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:28:33.243092 systemd[1]: cri-containerd-fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0.scope: Deactivated successfully. Dec 16 12:28:33.246594 systemd[1]: cri-containerd-fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0.scope: Consumed 945ms CPU time, 185.8M memory peak, 165.9M written to disk. Dec 16 12:28:33.250596 containerd[2004]: time="2025-12-16T12:28:33.250509191Z" level=info msg="received container exit event container_id:\"fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0\" id:\"fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0\" pid:4284 exited_at:{seconds:1765888113 nanos:250155815}" Dec 16 12:28:33.286208 kubelet[3334]: I1216 12:28:33.285077 3334 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:28:33.308355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd12bd9aa8681414561d87cbc1ff8ccb5aed2d48d119991d35311009e50894f0-rootfs.mount: Deactivated successfully. 
Dec 16 12:28:33.416266 systemd[1]: Created slice kubepods-besteffort-podc54f17de_062d_4ea0_b0e3_144077363c3e.slice - libcontainer container kubepods-besteffort-podc54f17de_062d_4ea0_b0e3_144077363c3e.slice. Dec 16 12:28:33.464029 systemd[1]: Created slice kubepods-besteffort-pod4b0a11dd_5ec4_458d_86dc_437a0146fd85.slice - libcontainer container kubepods-besteffort-pod4b0a11dd_5ec4_458d_86dc_437a0146fd85.slice. Dec 16 12:28:33.519227 systemd[1]: Created slice kubepods-burstable-pod62f184b6_8041_4d07_8a90_e198a52ad38e.slice - libcontainer container kubepods-burstable-pod62f184b6_8041_4d07_8a90_e198a52ad38e.slice. Dec 16 12:28:33.545583 kubelet[3334]: I1216 12:28:33.545508 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c54f17de-062d-4ea0-b0e3-144077363c3e-tigera-ca-bundle\") pod \"calico-kube-controllers-57cf7db4b7-6r27k\" (UID: \"c54f17de-062d-4ea0-b0e3-144077363c3e\") " pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" Dec 16 12:28:33.545782 kubelet[3334]: I1216 12:28:33.545657 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g96bm\" (UniqueName: \"kubernetes.io/projected/c54f17de-062d-4ea0-b0e3-144077363c3e-kube-api-access-g96bm\") pod \"calico-kube-controllers-57cf7db4b7-6r27k\" (UID: \"c54f17de-062d-4ea0-b0e3-144077363c3e\") " pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" Dec 16 12:28:33.545782 kubelet[3334]: I1216 12:28:33.545733 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b0a11dd-5ec4-458d-86dc-437a0146fd85-calico-apiserver-certs\") pod \"calico-apiserver-7c59c4c686-xwscr\" (UID: \"4b0a11dd-5ec4-458d-86dc-437a0146fd85\") " pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" Dec 16 12:28:33.545782 kubelet[3334]: I1216 12:28:33.545776 
3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f2cm\" (UniqueName: \"kubernetes.io/projected/4b0a11dd-5ec4-458d-86dc-437a0146fd85-kube-api-access-6f2cm\") pod \"calico-apiserver-7c59c4c686-xwscr\" (UID: \"4b0a11dd-5ec4-458d-86dc-437a0146fd85\") " pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" Dec 16 12:28:33.568709 systemd[1]: Created slice kubepods-burstable-pod8b41ae06_ee3e_4832_a16f_282cefaf725a.slice - libcontainer container kubepods-burstable-pod8b41ae06_ee3e_4832_a16f_282cefaf725a.slice. Dec 16 12:28:33.624906 systemd[1]: Created slice kubepods-besteffort-podb28336ef_bcfa_4481_a4ab_447af79aaaba.slice - libcontainer container kubepods-besteffort-podb28336ef_bcfa_4481_a4ab_447af79aaaba.slice. Dec 16 12:28:33.646409 kubelet[3334]: I1216 12:28:33.646348 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcf74\" (UniqueName: \"kubernetes.io/projected/62f184b6-8041-4d07-8a90-e198a52ad38e-kube-api-access-jcf74\") pod \"coredns-66bc5c9577-w6lcv\" (UID: \"62f184b6-8041-4d07-8a90-e198a52ad38e\") " pod="kube-system/coredns-66bc5c9577-w6lcv" Dec 16 12:28:33.646626 kubelet[3334]: I1216 12:28:33.646446 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62f184b6-8041-4d07-8a90-e198a52ad38e-config-volume\") pod \"coredns-66bc5c9577-w6lcv\" (UID: \"62f184b6-8041-4d07-8a90-e198a52ad38e\") " pod="kube-system/coredns-66bc5c9577-w6lcv" Dec 16 12:28:33.646626 kubelet[3334]: I1216 12:28:33.646519 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b41ae06-ee3e-4832-a16f-282cefaf725a-config-volume\") pod \"coredns-66bc5c9577-mrbsz\" (UID: \"8b41ae06-ee3e-4832-a16f-282cefaf725a\") " pod="kube-system/coredns-66bc5c9577-mrbsz" Dec 
16 12:28:33.646626 kubelet[3334]: I1216 12:28:33.646596 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlvz4\" (UniqueName: \"kubernetes.io/projected/8b41ae06-ee3e-4832-a16f-282cefaf725a-kube-api-access-wlvz4\") pod \"coredns-66bc5c9577-mrbsz\" (UID: \"8b41ae06-ee3e-4832-a16f-282cefaf725a\") " pod="kube-system/coredns-66bc5c9577-mrbsz" Dec 16 12:28:33.736852 systemd[1]: Created slice kubepods-besteffort-pod498c6616_13e2_4682_b7b0_5dc0ae0967ac.slice - libcontainer container kubepods-besteffort-pod498c6616_13e2_4682_b7b0_5dc0ae0967ac.slice. Dec 16 12:28:33.748015 kubelet[3334]: I1216 12:28:33.747684 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stdpc\" (UniqueName: \"kubernetes.io/projected/b28336ef-bcfa-4481-a4ab-447af79aaaba-kube-api-access-stdpc\") pod \"calico-apiserver-7c59c4c686-nm8c9\" (UID: \"b28336ef-bcfa-4481-a4ab-447af79aaaba\") " pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" Dec 16 12:28:33.748015 kubelet[3334]: I1216 12:28:33.747811 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-backend-key-pair\") pod \"whisker-84db8877d9-gk5qn\" (UID: \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\") " pod="calico-system/whisker-84db8877d9-gk5qn" Dec 16 12:28:33.748015 kubelet[3334]: I1216 12:28:33.747882 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-ca-bundle\") pod \"whisker-84db8877d9-gk5qn\" (UID: \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\") " pod="calico-system/whisker-84db8877d9-gk5qn" Dec 16 12:28:33.748015 kubelet[3334]: I1216 12:28:33.747919 3334 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b28336ef-bcfa-4481-a4ab-447af79aaaba-calico-apiserver-certs\") pod \"calico-apiserver-7c59c4c686-nm8c9\" (UID: \"b28336ef-bcfa-4481-a4ab-447af79aaaba\") " pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" Dec 16 12:28:33.749675 kubelet[3334]: I1216 12:28:33.749598 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hjx9\" (UniqueName: \"kubernetes.io/projected/498c6616-13e2-4682-b7b0-5dc0ae0967ac-kube-api-access-9hjx9\") pod \"whisker-84db8877d9-gk5qn\" (UID: \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\") " pod="calico-system/whisker-84db8877d9-gk5qn" Dec 16 12:28:33.806896 systemd[1]: Created slice kubepods-besteffort-pod59a087e1_e448_441a_b97a_fe80bf31dd45.slice - libcontainer container kubepods-besteffort-pod59a087e1_e448_441a_b97a_fe80bf31dd45.slice. Dec 16 12:28:33.820644 containerd[2004]: time="2025-12-16T12:28:33.820567958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57cf7db4b7-6r27k,Uid:c54f17de-062d-4ea0-b0e3-144077363c3e,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:33.851051 kubelet[3334]: I1216 12:28:33.850149 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/59a087e1-e448-441a-b97a-fe80bf31dd45-goldmane-key-pair\") pod \"goldmane-7c778bb748-m7qfq\" (UID: \"59a087e1-e448-441a-b97a-fe80bf31dd45\") " pod="calico-system/goldmane-7c778bb748-m7qfq" Dec 16 12:28:33.851051 kubelet[3334]: I1216 12:28:33.850221 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtfh\" (UniqueName: \"kubernetes.io/projected/59a087e1-e448-441a-b97a-fe80bf31dd45-kube-api-access-fwtfh\") pod \"goldmane-7c778bb748-m7qfq\" (UID: \"59a087e1-e448-441a-b97a-fe80bf31dd45\") " 
pod="calico-system/goldmane-7c778bb748-m7qfq" Dec 16 12:28:33.851051 kubelet[3334]: I1216 12:28:33.850288 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a087e1-e448-441a-b97a-fe80bf31dd45-config\") pod \"goldmane-7c778bb748-m7qfq\" (UID: \"59a087e1-e448-441a-b97a-fe80bf31dd45\") " pod="calico-system/goldmane-7c778bb748-m7qfq" Dec 16 12:28:33.851051 kubelet[3334]: I1216 12:28:33.850371 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59a087e1-e448-441a-b97a-fe80bf31dd45-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-m7qfq\" (UID: \"59a087e1-e448-441a-b97a-fe80bf31dd45\") " pod="calico-system/goldmane-7c778bb748-m7qfq" Dec 16 12:28:33.889226 containerd[2004]: time="2025-12-16T12:28:33.889145282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-xwscr,Uid:4b0a11dd-5ec4-458d-86dc-437a0146fd85,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:28:33.978431 containerd[2004]: time="2025-12-16T12:28:33.978357950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6lcv,Uid:62f184b6-8041-4d07-8a90-e198a52ad38e,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:34.029553 containerd[2004]: time="2025-12-16T12:28:34.029392823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrbsz,Uid:8b41ae06-ee3e-4832-a16f-282cefaf725a,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:34.101343 containerd[2004]: time="2025-12-16T12:28:34.100594883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-nm8c9,Uid:b28336ef-bcfa-4481-a4ab-447af79aaaba,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:28:34.152275 containerd[2004]: time="2025-12-16T12:28:34.152216567Z" level=error msg="Failed to destroy network for sandbox 
\"8ba534580a8aaf5c2c41adce86e9eb06e4ff63c74803bacc31d5fb2450206e49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.179959 containerd[2004]: time="2025-12-16T12:28:34.179810915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84db8877d9-gk5qn,Uid:498c6616-13e2-4682-b7b0-5dc0ae0967ac,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:34.247863 containerd[2004]: time="2025-12-16T12:28:34.247564308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m7qfq,Uid:59a087e1-e448-441a-b97a-fe80bf31dd45,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:34.311464 containerd[2004]: time="2025-12-16T12:28:34.311349888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57cf7db4b7-6r27k,Uid:c54f17de-062d-4ea0-b0e3-144077363c3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ba534580a8aaf5c2c41adce86e9eb06e4ff63c74803bacc31d5fb2450206e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.315290 kubelet[3334]: E1216 12:28:34.315192 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ba534580a8aaf5c2c41adce86e9eb06e4ff63c74803bacc31d5fb2450206e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.315527 kubelet[3334]: E1216 12:28:34.315297 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8ba534580a8aaf5c2c41adce86e9eb06e4ff63c74803bacc31d5fb2450206e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" Dec 16 12:28:34.315527 kubelet[3334]: E1216 12:28:34.315332 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ba534580a8aaf5c2c41adce86e9eb06e4ff63c74803bacc31d5fb2450206e49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" Dec 16 12:28:34.315527 kubelet[3334]: E1216 12:28:34.315423 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57cf7db4b7-6r27k_calico-system(c54f17de-062d-4ea0-b0e3-144077363c3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57cf7db4b7-6r27k_calico-system(c54f17de-062d-4ea0-b0e3-144077363c3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ba534580a8aaf5c2c41adce86e9eb06e4ff63c74803bacc31d5fb2450206e49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:28:34.694532 containerd[2004]: time="2025-12-16T12:28:34.694457954Z" level=error msg="Failed to destroy network for sandbox \"d1273f4acb0cb2762aec3735e7234abbdea6cdbdde7581205471d5be5a09811e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.701296 systemd[1]: run-netns-cni\x2d919606c3\x2d54e7\x2d0c4d\x2d03ec\x2db539b7358ffc.mount: Deactivated successfully. Dec 16 12:28:34.703514 containerd[2004]: time="2025-12-16T12:28:34.702660038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-xwscr,Uid:4b0a11dd-5ec4-458d-86dc-437a0146fd85,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1273f4acb0cb2762aec3735e7234abbdea6cdbdde7581205471d5be5a09811e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.704270 kubelet[3334]: E1216 12:28:34.704206 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1273f4acb0cb2762aec3735e7234abbdea6cdbdde7581205471d5be5a09811e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.704893 kubelet[3334]: E1216 12:28:34.704821 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1273f4acb0cb2762aec3735e7234abbdea6cdbdde7581205471d5be5a09811e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" Dec 16 12:28:34.705151 kubelet[3334]: E1216 12:28:34.705114 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d1273f4acb0cb2762aec3735e7234abbdea6cdbdde7581205471d5be5a09811e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" Dec 16 12:28:34.705536 kubelet[3334]: E1216 12:28:34.705373 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c59c4c686-xwscr_calico-apiserver(4b0a11dd-5ec4-458d-86dc-437a0146fd85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c59c4c686-xwscr_calico-apiserver(4b0a11dd-5ec4-458d-86dc-437a0146fd85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1273f4acb0cb2762aec3735e7234abbdea6cdbdde7581205471d5be5a09811e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:28:34.721937 containerd[2004]: time="2025-12-16T12:28:34.721864670Z" level=error msg="Failed to destroy network for sandbox \"4d1082344e8174742386a808e540a51cd0ee86cfc1045ea6ad7e147304582185\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.729551 systemd[1]: run-netns-cni\x2dba93b0c0\x2d05e9\x2d8b11\x2d3aaf\x2d1a986cfbf0fa.mount: Deactivated successfully. 
Dec 16 12:28:34.730265 containerd[2004]: time="2025-12-16T12:28:34.730025954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m7qfq,Uid:59a087e1-e448-441a-b97a-fe80bf31dd45,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1082344e8174742386a808e540a51cd0ee86cfc1045ea6ad7e147304582185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.734267 kubelet[3334]: E1216 12:28:34.733675 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1082344e8174742386a808e540a51cd0ee86cfc1045ea6ad7e147304582185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.734267 kubelet[3334]: E1216 12:28:34.733754 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1082344e8174742386a808e540a51cd0ee86cfc1045ea6ad7e147304582185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-m7qfq" Dec 16 12:28:34.734267 kubelet[3334]: E1216 12:28:34.733789 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1082344e8174742386a808e540a51cd0ee86cfc1045ea6ad7e147304582185\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-m7qfq" Dec 16 12:28:34.734557 kubelet[3334]: E1216 12:28:34.733872 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-m7qfq_calico-system(59a087e1-e448-441a-b97a-fe80bf31dd45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-m7qfq_calico-system(59a087e1-e448-441a-b97a-fe80bf31dd45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d1082344e8174742386a808e540a51cd0ee86cfc1045ea6ad7e147304582185\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:28:34.737675 containerd[2004]: time="2025-12-16T12:28:34.737525822Z" level=error msg="Failed to destroy network for sandbox \"9719039a77a160b42321f5c12a5f18058de8bfc1248b9a0824458ded7056e069\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.738329 containerd[2004]: time="2025-12-16T12:28:34.738240290Z" level=error msg="Failed to destroy network for sandbox \"91ee60ffa4dfa2956f27e4fe305da7d840ffb4fe7ee475205a30282c42e42ce4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.744624 containerd[2004]: time="2025-12-16T12:28:34.744488402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrbsz,Uid:8b41ae06-ee3e-4832-a16f-282cefaf725a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9719039a77a160b42321f5c12a5f18058de8bfc1248b9a0824458ded7056e069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.745175 kubelet[3334]: E1216 12:28:34.744805 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9719039a77a160b42321f5c12a5f18058de8bfc1248b9a0824458ded7056e069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.745175 kubelet[3334]: E1216 12:28:34.744880 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9719039a77a160b42321f5c12a5f18058de8bfc1248b9a0824458ded7056e069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mrbsz" Dec 16 12:28:34.745175 kubelet[3334]: E1216 12:28:34.744919 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9719039a77a160b42321f5c12a5f18058de8bfc1248b9a0824458ded7056e069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mrbsz" Dec 16 12:28:34.745423 kubelet[3334]: E1216 12:28:34.745034 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mrbsz_kube-system(8b41ae06-ee3e-4832-a16f-282cefaf725a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-mrbsz_kube-system(8b41ae06-ee3e-4832-a16f-282cefaf725a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9719039a77a160b42321f5c12a5f18058de8bfc1248b9a0824458ded7056e069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mrbsz" podUID="8b41ae06-ee3e-4832-a16f-282cefaf725a" Dec 16 12:28:34.747914 containerd[2004]: time="2025-12-16T12:28:34.747824714Z" level=error msg="Failed to destroy network for sandbox \"37e5455337e63ef02d999d024574ee85b15386c28bcf809b458029778b31009a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.748371 containerd[2004]: time="2025-12-16T12:28:34.747885794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-nm8c9,Uid:b28336ef-bcfa-4481-a4ab-447af79aaaba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ee60ffa4dfa2956f27e4fe305da7d840ffb4fe7ee475205a30282c42e42ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.749211 kubelet[3334]: E1216 12:28:34.748622 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ee60ffa4dfa2956f27e4fe305da7d840ffb4fe7ee475205a30282c42e42ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.749211 kubelet[3334]: E1216 12:28:34.748691 3334 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ee60ffa4dfa2956f27e4fe305da7d840ffb4fe7ee475205a30282c42e42ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" Dec 16 12:28:34.749211 kubelet[3334]: E1216 12:28:34.748723 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ee60ffa4dfa2956f27e4fe305da7d840ffb4fe7ee475205a30282c42e42ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" Dec 16 12:28:34.750599 kubelet[3334]: E1216 12:28:34.749069 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c59c4c686-nm8c9_calico-apiserver(b28336ef-bcfa-4481-a4ab-447af79aaaba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c59c4c686-nm8c9_calico-apiserver(b28336ef-bcfa-4481-a4ab-447af79aaaba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91ee60ffa4dfa2956f27e4fe305da7d840ffb4fe7ee475205a30282c42e42ce4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:28:34.753325 containerd[2004]: time="2025-12-16T12:28:34.753110846Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-84db8877d9-gk5qn,Uid:498c6616-13e2-4682-b7b0-5dc0ae0967ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5455337e63ef02d999d024574ee85b15386c28bcf809b458029778b31009a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.754621 kubelet[3334]: E1216 12:28:34.754106 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5455337e63ef02d999d024574ee85b15386c28bcf809b458029778b31009a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.754621 kubelet[3334]: E1216 12:28:34.754491 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5455337e63ef02d999d024574ee85b15386c28bcf809b458029778b31009a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84db8877d9-gk5qn" Dec 16 12:28:34.754621 kubelet[3334]: E1216 12:28:34.754532 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e5455337e63ef02d999d024574ee85b15386c28bcf809b458029778b31009a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84db8877d9-gk5qn" Dec 16 12:28:34.755328 kubelet[3334]: E1216 12:28:34.754960 3334 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84db8877d9-gk5qn_calico-system(498c6616-13e2-4682-b7b0-5dc0ae0967ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84db8877d9-gk5qn_calico-system(498c6616-13e2-4682-b7b0-5dc0ae0967ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37e5455337e63ef02d999d024574ee85b15386c28bcf809b458029778b31009a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84db8877d9-gk5qn" podUID="498c6616-13e2-4682-b7b0-5dc0ae0967ac" Dec 16 12:28:34.759403 containerd[2004]: time="2025-12-16T12:28:34.759307526Z" level=error msg="Failed to destroy network for sandbox \"0fe5e4e03a7c8c69887919986f3e54923bd61c93fa5f5a35b21d36bf9b4d5558\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.762548 containerd[2004]: time="2025-12-16T12:28:34.762248498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6lcv,Uid:62f184b6-8041-4d07-8a90-e198a52ad38e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe5e4e03a7c8c69887919986f3e54923bd61c93fa5f5a35b21d36bf9b4d5558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.763307 kubelet[3334]: E1216 12:28:34.763227 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe5e4e03a7c8c69887919986f3e54923bd61c93fa5f5a35b21d36bf9b4d5558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.763539 kubelet[3334]: E1216 12:28:34.763311 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe5e4e03a7c8c69887919986f3e54923bd61c93fa5f5a35b21d36bf9b4d5558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w6lcv" Dec 16 12:28:34.763539 kubelet[3334]: E1216 12:28:34.763346 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe5e4e03a7c8c69887919986f3e54923bd61c93fa5f5a35b21d36bf9b4d5558\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-w6lcv" Dec 16 12:28:34.763539 kubelet[3334]: E1216 12:28:34.763433 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-w6lcv_kube-system(62f184b6-8041-4d07-8a90-e198a52ad38e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-w6lcv_kube-system(62f184b6-8041-4d07-8a90-e198a52ad38e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe5e4e03a7c8c69887919986f3e54923bd61c93fa5f5a35b21d36bf9b4d5558\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w6lcv" podUID="62f184b6-8041-4d07-8a90-e198a52ad38e" Dec 16 12:28:34.851635 systemd[1]: Created slice 
kubepods-besteffort-pod0821a17d_3c03_4228_a061_1c97b86f544e.slice - libcontainer container kubepods-besteffort-pod0821a17d_3c03_4228_a061_1c97b86f544e.slice. Dec 16 12:28:34.860941 containerd[2004]: time="2025-12-16T12:28:34.860872779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qps4l,Uid:0821a17d-3c03-4228-a061-1c97b86f544e,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:34.962212 containerd[2004]: time="2025-12-16T12:28:34.961680135Z" level=error msg="Failed to destroy network for sandbox \"d368202a7a3a163f90dfc8f6c3b65fd151c5bd286a2e7a2c938de3de5e90128d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.965790 containerd[2004]: time="2025-12-16T12:28:34.964906983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qps4l,Uid:0821a17d-3c03-4228-a061-1c97b86f544e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d368202a7a3a163f90dfc8f6c3b65fd151c5bd286a2e7a2c938de3de5e90128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.965985 kubelet[3334]: E1216 12:28:34.965785 3334 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d368202a7a3a163f90dfc8f6c3b65fd151c5bd286a2e7a2c938de3de5e90128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:28:34.965985 kubelet[3334]: E1216 12:28:34.965864 3334 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"d368202a7a3a163f90dfc8f6c3b65fd151c5bd286a2e7a2c938de3de5e90128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qps4l" Dec 16 12:28:34.965985 kubelet[3334]: E1216 12:28:34.965902 3334 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d368202a7a3a163f90dfc8f6c3b65fd151c5bd286a2e7a2c938de3de5e90128d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qps4l" Dec 16 12:28:34.966676 kubelet[3334]: E1216 12:28:34.966019 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d368202a7a3a163f90dfc8f6c3b65fd151c5bd286a2e7a2c938de3de5e90128d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:35.164669 containerd[2004]: time="2025-12-16T12:28:35.164353464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:28:35.305301 systemd[1]: run-netns-cni\x2d6741bb23\x2d2b62\x2d5017\x2dae45\x2d7bcd4b727429.mount: Deactivated successfully. Dec 16 12:28:35.305491 systemd[1]: run-netns-cni\x2daa200f40\x2da34f\x2d9232\x2d01b1\x2ddfecda4ff2a6.mount: Deactivated successfully. 
Dec 16 12:28:35.305639 systemd[1]: run-netns-cni\x2da1b612ec\x2d6786\x2df0bc\x2d05ff\x2d19e835e815c9.mount: Deactivated successfully. Dec 16 12:28:35.305758 systemd[1]: run-netns-cni\x2d9c5aa4ac\x2d975c\x2de2e6\x2d35a4\x2da43d09900ca9.mount: Deactivated successfully. Dec 16 12:28:41.822735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444235489.mount: Deactivated successfully. Dec 16 12:28:41.885316 containerd[2004]: time="2025-12-16T12:28:41.885214846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:41.887665 containerd[2004]: time="2025-12-16T12:28:41.887397538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:28:41.890171 containerd[2004]: time="2025-12-16T12:28:41.890104786Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:41.897884 containerd[2004]: time="2025-12-16T12:28:41.896366290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:28:41.897884 containerd[2004]: time="2025-12-16T12:28:41.897683218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.733005394s" Dec 16 12:28:41.897884 containerd[2004]: time="2025-12-16T12:28:41.897743434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:28:41.935909 containerd[2004]: time="2025-12-16T12:28:41.935857234Z" level=info msg="CreateContainer within sandbox \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:28:41.967000 containerd[2004]: time="2025-12-16T12:28:41.965230294Z" level=info msg="Container 5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:41.996876 containerd[2004]: time="2025-12-16T12:28:41.996807634Z" level=info msg="CreateContainer within sandbox \"700795f14ccf729b68e0723d3e12006414a1838dc1f52283b68001007a386c5b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae\"" Dec 16 12:28:41.998497 containerd[2004]: time="2025-12-16T12:28:41.998234194Z" level=info msg="StartContainer for \"5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae\"" Dec 16 12:28:42.002812 containerd[2004]: time="2025-12-16T12:28:42.002698098Z" level=info msg="connecting to shim 5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae" address="unix:///run/containerd/s/e552941ee230f840568921d42033af4d6a803e42f527b97a39a417cf91dd7254" protocol=ttrpc version=3 Dec 16 12:28:42.048291 systemd[1]: Started cri-containerd-5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae.scope - libcontainer container 5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae. Dec 16 12:28:42.162824 containerd[2004]: time="2025-12-16T12:28:42.162759223Z" level=info msg="StartContainer for \"5c9ac8e553f6457d28362238c83af0ab591addb7a7d862f4f23790f7084f73ae\" returns successfully" Dec 16 12:28:42.491661 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:28:42.492487 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Dec 16 12:28:42.743696 kubelet[3334]: I1216 12:28:42.743580 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pmx42" podStartSLOduration=2.27266362 podStartE2EDuration="18.743552866s" podCreationTimestamp="2025-12-16 12:28:24 +0000 UTC" firstStartedPulling="2025-12-16 12:28:25.428792596 +0000 UTC m=+36.991918145" lastFinishedPulling="2025-12-16 12:28:41.89968183 +0000 UTC m=+53.462807391" observedRunningTime="2025-12-16 12:28:42.260483323 +0000 UTC m=+53.823608884" watchObservedRunningTime="2025-12-16 12:28:42.743552866 +0000 UTC m=+54.306678415" Dec 16 12:28:42.938988 kubelet[3334]: I1216 12:28:42.938325 3334 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-backend-key-pair\") pod \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\" (UID: \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\") " Dec 16 12:28:42.938988 kubelet[3334]: I1216 12:28:42.938409 3334 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-ca-bundle\") pod \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\" (UID: \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\") " Dec 16 12:28:42.938988 kubelet[3334]: I1216 12:28:42.938446 3334 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hjx9\" (UniqueName: \"kubernetes.io/projected/498c6616-13e2-4682-b7b0-5dc0ae0967ac-kube-api-access-9hjx9\") pod \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\" (UID: \"498c6616-13e2-4682-b7b0-5dc0ae0967ac\") " Dec 16 12:28:42.949737 kubelet[3334]: I1216 12:28:42.949661 3334 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"498c6616-13e2-4682-b7b0-5dc0ae0967ac" (UID: "498c6616-13e2-4682-b7b0-5dc0ae0967ac"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:28:42.959409 kubelet[3334]: I1216 12:28:42.959291 3334 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498c6616-13e2-4682-b7b0-5dc0ae0967ac-kube-api-access-9hjx9" (OuterVolumeSpecName: "kube-api-access-9hjx9") pod "498c6616-13e2-4682-b7b0-5dc0ae0967ac" (UID: "498c6616-13e2-4682-b7b0-5dc0ae0967ac"). InnerVolumeSpecName "kube-api-access-9hjx9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:28:42.961011 kubelet[3334]: I1216 12:28:42.960924 3334 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "498c6616-13e2-4682-b7b0-5dc0ae0967ac" (UID: "498c6616-13e2-4682-b7b0-5dc0ae0967ac"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:28:42.966894 systemd[1]: var-lib-kubelet-pods-498c6616\x2d13e2\x2d4682\x2db7b0\x2d5dc0ae0967ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9hjx9.mount: Deactivated successfully. Dec 16 12:28:42.967140 systemd[1]: var-lib-kubelet-pods-498c6616\x2d13e2\x2d4682\x2db7b0\x2d5dc0ae0967ac-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:28:43.039702 kubelet[3334]: I1216 12:28:43.038912 3334 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-ca-bundle\") on node \"ip-172-31-24-3\" DevicePath \"\"" Dec 16 12:28:43.039702 kubelet[3334]: I1216 12:28:43.039307 3334 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hjx9\" (UniqueName: \"kubernetes.io/projected/498c6616-13e2-4682-b7b0-5dc0ae0967ac-kube-api-access-9hjx9\") on node \"ip-172-31-24-3\" DevicePath \"\"" Dec 16 12:28:43.040236 kubelet[3334]: I1216 12:28:43.039336 3334 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/498c6616-13e2-4682-b7b0-5dc0ae0967ac-whisker-backend-key-pair\") on node \"ip-172-31-24-3\" DevicePath \"\"" Dec 16 12:28:43.231841 systemd[1]: Removed slice kubepods-besteffort-pod498c6616_13e2_4682_b7b0_5dc0ae0967ac.slice - libcontainer container kubepods-besteffort-pod498c6616_13e2_4682_b7b0_5dc0ae0967ac.slice. Dec 16 12:28:43.386147 systemd[1]: Created slice kubepods-besteffort-pod582509e9_d0bd_4a8e_a8bd_67905f25b45b.slice - libcontainer container kubepods-besteffort-pod582509e9_d0bd_4a8e_a8bd_67905f25b45b.slice. 
Dec 16 12:28:43.544475 kubelet[3334]: I1216 12:28:43.544357 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/582509e9-d0bd-4a8e-a8bd-67905f25b45b-whisker-backend-key-pair\") pod \"whisker-7c5c54bb9-nqsf9\" (UID: \"582509e9-d0bd-4a8e-a8bd-67905f25b45b\") " pod="calico-system/whisker-7c5c54bb9-nqsf9" Dec 16 12:28:43.544780 kubelet[3334]: I1216 12:28:43.544745 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvzg\" (UniqueName: \"kubernetes.io/projected/582509e9-d0bd-4a8e-a8bd-67905f25b45b-kube-api-access-bdvzg\") pod \"whisker-7c5c54bb9-nqsf9\" (UID: \"582509e9-d0bd-4a8e-a8bd-67905f25b45b\") " pod="calico-system/whisker-7c5c54bb9-nqsf9" Dec 16 12:28:43.545082 kubelet[3334]: I1216 12:28:43.545002 3334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/582509e9-d0bd-4a8e-a8bd-67905f25b45b-whisker-ca-bundle\") pod \"whisker-7c5c54bb9-nqsf9\" (UID: \"582509e9-d0bd-4a8e-a8bd-67905f25b45b\") " pod="calico-system/whisker-7c5c54bb9-nqsf9" Dec 16 12:28:43.705495 containerd[2004]: time="2025-12-16T12:28:43.704912495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c5c54bb9-nqsf9,Uid:582509e9-d0bd-4a8e-a8bd-67905f25b45b,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:44.066193 (udev-worker)[4609]: Network interface NamePolicy= disabled on kernel command line. 
Dec 16 12:28:44.072536 systemd-networkd[1888]: cali4a5c6251854: Link UP Dec 16 12:28:44.074118 systemd-networkd[1888]: cali4a5c6251854: Gained carrier Dec 16 12:28:44.110812 containerd[2004]: 2025-12-16 12:28:43.770 [INFO][4661] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:28:44.110812 containerd[2004]: 2025-12-16 12:28:43.866 [INFO][4661] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0 whisker-7c5c54bb9- calico-system 582509e9-d0bd-4a8e-a8bd-67905f25b45b 946 0 2025-12-16 12:28:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c5c54bb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-3 whisker-7c5c54bb9-nqsf9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4a5c6251854 [] [] }} ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-" Dec 16 12:28:44.110812 containerd[2004]: 2025-12-16 12:28:43.866 [INFO][4661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.110812 containerd[2004]: 2025-12-16 12:28:43.982 [INFO][4673] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" HandleID="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Workload="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:43.982 [INFO][4673] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" HandleID="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Workload="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030c140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-3", "pod":"whisker-7c5c54bb9-nqsf9", "timestamp":"2025-12-16 12:28:43.982042044 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:43.982 [INFO][4673] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:43.982 [INFO][4673] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:43.982 [INFO][4673] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:44.001 [INFO][4673] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" host="ip-172-31-24-3" Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:44.011 [INFO][4673] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:44.018 [INFO][4673] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:44.022 [INFO][4673] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:44.026 [INFO][4673] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:44.111533 containerd[2004]: 2025-12-16 12:28:44.027 [INFO][4673] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" host="ip-172-31-24-3" Dec 16 12:28:44.113002 containerd[2004]: 2025-12-16 12:28:44.030 [INFO][4673] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609 Dec 16 12:28:44.113002 containerd[2004]: 2025-12-16 12:28:44.036 [INFO][4673] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" host="ip-172-31-24-3" Dec 16 12:28:44.113002 containerd[2004]: 2025-12-16 12:28:44.048 [INFO][4673] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.1/26] block=192.168.116.0/26 
handle="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" host="ip-172-31-24-3" Dec 16 12:28:44.113002 containerd[2004]: 2025-12-16 12:28:44.048 [INFO][4673] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.1/26] handle="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" host="ip-172-31-24-3" Dec 16 12:28:44.113002 containerd[2004]: 2025-12-16 12:28:44.048 [INFO][4673] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:44.113002 containerd[2004]: 2025-12-16 12:28:44.048 [INFO][4673] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.1/26] IPv6=[] ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" HandleID="k8s-pod-network.52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Workload="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.113330 containerd[2004]: 2025-12-16 12:28:44.055 [INFO][4661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0", GenerateName:"whisker-7c5c54bb9-", Namespace:"calico-system", SelfLink:"", UID:"582509e9-d0bd-4a8e-a8bd-67905f25b45b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c5c54bb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"whisker-7c5c54bb9-nqsf9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a5c6251854", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:44.113330 containerd[2004]: 2025-12-16 12:28:44.055 [INFO][4661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.1/32] ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.113532 containerd[2004]: 2025-12-16 12:28:44.055 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a5c6251854 ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.113532 containerd[2004]: 2025-12-16 12:28:44.074 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.113626 containerd[2004]: 2025-12-16 12:28:44.076 [INFO][4661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" 
Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0", GenerateName:"whisker-7c5c54bb9-", Namespace:"calico-system", SelfLink:"", UID:"582509e9-d0bd-4a8e-a8bd-67905f25b45b", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c5c54bb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609", Pod:"whisker-7c5c54bb9-nqsf9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4a5c6251854", MAC:"c6:00:0a:e5:86:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:44.113744 containerd[2004]: 2025-12-16 12:28:44.104 [INFO][4661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" Namespace="calico-system" Pod="whisker-7c5c54bb9-nqsf9" WorkloadEndpoint="ip--172--31--24--3-k8s-whisker--7c5c54bb9--nqsf9-eth0" Dec 16 12:28:44.164301 containerd[2004]: 
time="2025-12-16T12:28:44.164240745Z" level=info msg="connecting to shim 52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609" address="unix:///run/containerd/s/0a6f2dce4612d590199a5d4e9b478307a7fac4ab35f71d145359eb22848aa43b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:44.251498 systemd[1]: Started cri-containerd-52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609.scope - libcontainer container 52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609. Dec 16 12:28:44.397756 containerd[2004]: time="2025-12-16T12:28:44.396106642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c5c54bb9-nqsf9,Uid:582509e9-d0bd-4a8e-a8bd-67905f25b45b,Namespace:calico-system,Attempt:0,} returns sandbox id \"52292354325979dc8f1e6dac7ce391446020d2da338871f0e060866e3d2e7609\"" Dec 16 12:28:44.405172 containerd[2004]: time="2025-12-16T12:28:44.404803426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:28:44.759485 containerd[2004]: time="2025-12-16T12:28:44.759362628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:44.763000 containerd[2004]: time="2025-12-16T12:28:44.762769548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:28:44.763000 containerd[2004]: time="2025-12-16T12:28:44.762915756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:28:44.763210 kubelet[3334]: E1216 12:28:44.763173 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:28:44.763683 kubelet[3334]: E1216 12:28:44.763235 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:28:44.765687 kubelet[3334]: E1216 12:28:44.763402 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:44.771138 containerd[2004]: time="2025-12-16T12:28:44.770706156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:28:44.849048 kubelet[3334]: I1216 12:28:44.848154 3334 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498c6616-13e2-4682-b7b0-5dc0ae0967ac" path="/var/lib/kubelet/pods/498c6616-13e2-4682-b7b0-5dc0ae0967ac/volumes" Dec 16 12:28:45.054060 containerd[2004]: time="2025-12-16T12:28:45.053346105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:45.055873 containerd[2004]: time="2025-12-16T12:28:45.055798209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:45.057036 containerd[2004]: time="2025-12-16T12:28:45.055832745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:28:45.057595 kubelet[3334]: E1216 12:28:45.057507 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:28:45.057743 kubelet[3334]: E1216 12:28:45.057602 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:28:45.058485 kubelet[3334]: E1216 12:28:45.058024 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:45.058485 kubelet[3334]: E1216 12:28:45.058120 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:28:45.232943 kubelet[3334]: E1216 12:28:45.232867 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:28:45.368342 systemd-networkd[1888]: cali4a5c6251854: Gained IPv6LL Dec 16 12:28:45.845480 containerd[2004]: time="2025-12-16T12:28:45.845258953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-nm8c9,Uid:b28336ef-bcfa-4481-a4ab-447af79aaaba,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:28:45.851475 containerd[2004]: time="2025-12-16T12:28:45.851260225Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-xwscr,Uid:4b0a11dd-5ec4-458d-86dc-437a0146fd85,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:28:46.046948 systemd[1]: Started sshd@7-172.31.24.3:22-139.178.89.65:50550.service - OpenSSH per-connection server daemon (139.178.89.65:50550). Dec 16 12:28:46.187890 (udev-worker)[4608]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:28:46.202129 systemd-networkd[1888]: vxlan.calico: Link UP Dec 16 12:28:46.202150 systemd-networkd[1888]: vxlan.calico: Gained carrier Dec 16 12:28:46.245540 kubelet[3334]: E1216 12:28:46.244698 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:28:46.323212 sshd[4897]: Accepted publickey for core from 139.178.89.65 port 50550 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:46.331033 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:46.349697 systemd-logind[1973]: New session 8 of user core. 
Dec 16 12:28:46.358325 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:28:46.462480 (udev-worker)[4939]: Network interface NamePolicy= disabled on kernel command line. Dec 16 12:28:46.466195 systemd-networkd[1888]: cali7a4bbd32741: Link UP Dec 16 12:28:46.472714 systemd-networkd[1888]: cali7a4bbd32741: Gained carrier Dec 16 12:28:46.534105 containerd[2004]: 2025-12-16 12:28:46.068 [INFO][4872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0 calico-apiserver-7c59c4c686- calico-apiserver b28336ef-bcfa-4481-a4ab-447af79aaaba 875 0 2025-12-16 12:28:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c59c4c686 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-3 calico-apiserver-7c59c4c686-nm8c9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a4bbd32741 [] [] }} ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-" Dec 16 12:28:46.534105 containerd[2004]: 2025-12-16 12:28:46.068 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.534105 containerd[2004]: 2025-12-16 12:28:46.305 [INFO][4904] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" 
HandleID="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Workload="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.306 [INFO][4904] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" HandleID="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Workload="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000378800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-3", "pod":"calico-apiserver-7c59c4c686-nm8c9", "timestamp":"2025-12-16 12:28:46.305779176 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.306 [INFO][4904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.307 [INFO][4904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.307 [INFO][4904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.328 [INFO][4904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" host="ip-172-31-24-3" Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.342 [INFO][4904] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.366 [INFO][4904] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.373 [INFO][4904] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:46.534422 containerd[2004]: 2025-12-16 12:28:46.411 [INFO][4904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.411 [INFO][4904] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" host="ip-172-31-24-3" Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.416 [INFO][4904] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835 Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.426 [INFO][4904] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" host="ip-172-31-24-3" Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.448 [INFO][4904] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.2/26] block=192.168.116.0/26 
handle="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" host="ip-172-31-24-3" Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.448 [INFO][4904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.2/26] handle="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" host="ip-172-31-24-3" Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.449 [INFO][4904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:46.534851 containerd[2004]: 2025-12-16 12:28:46.449 [INFO][4904] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.2/26] IPv6=[] ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" HandleID="k8s-pod-network.93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Workload="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.538535 containerd[2004]: 2025-12-16 12:28:46.457 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0", GenerateName:"calico-apiserver-7c59c4c686-", Namespace:"calico-apiserver", SelfLink:"", UID:"b28336ef-bcfa-4481-a4ab-447af79aaaba", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c59c4c686", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"calico-apiserver-7c59c4c686-nm8c9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a4bbd32741", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:46.538702 containerd[2004]: 2025-12-16 12:28:46.458 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.2/32] ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.538702 containerd[2004]: 2025-12-16 12:28:46.458 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a4bbd32741 ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.538702 containerd[2004]: 2025-12-16 12:28:46.476 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.538831 containerd[2004]: 2025-12-16 12:28:46.483 [INFO][4872] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0", GenerateName:"calico-apiserver-7c59c4c686-", Namespace:"calico-apiserver", SelfLink:"", UID:"b28336ef-bcfa-4481-a4ab-447af79aaaba", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c59c4c686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835", Pod:"calico-apiserver-7c59c4c686-nm8c9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a4bbd32741", MAC:"4e:cc:7e:47:d2:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:46.538954 containerd[2004]: 2025-12-16 12:28:46.523 [INFO][4872] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-nm8c9" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--nm8c9-eth0" Dec 16 12:28:46.654551 containerd[2004]: time="2025-12-16T12:28:46.654440245Z" level=info msg="connecting to shim 93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835" address="unix:///run/containerd/s/aa79c70e36864ff166f708aec5eb9a47a9e8658de114be0f7e0ce2d794dddf34" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:46.667056 systemd-networkd[1888]: cali398ccc1133e: Link UP Dec 16 12:28:46.674065 systemd-networkd[1888]: cali398ccc1133e: Gained carrier Dec 16 12:28:46.756289 containerd[2004]: 2025-12-16 12:28:46.068 [INFO][4874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0 calico-apiserver-7c59c4c686- calico-apiserver 4b0a11dd-5ec4-458d-86dc-437a0146fd85 872 0 2025-12-16 12:28:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c59c4c686 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-3 calico-apiserver-7c59c4c686-xwscr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali398ccc1133e [] [] }} ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-" Dec 16 12:28:46.756289 containerd[2004]: 2025-12-16 12:28:46.069 [INFO][4874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.756289 containerd[2004]: 2025-12-16 12:28:46.309 [INFO][4901] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" HandleID="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Workload="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.310 [INFO][4901] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" HandleID="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Workload="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-3", "pod":"calico-apiserver-7c59c4c686-xwscr", "timestamp":"2025-12-16 12:28:46.309849636 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.310 [INFO][4901] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.449 [INFO][4901] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.450 [INFO][4901] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.496 [INFO][4901] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" host="ip-172-31-24-3" Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.511 [INFO][4901] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.536 [INFO][4901] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.551 [INFO][4901] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:46.756597 containerd[2004]: 2025-12-16 12:28:46.557 [INFO][4901] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.557 [INFO][4901] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" host="ip-172-31-24-3" Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.570 [INFO][4901] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3 Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.598 [INFO][4901] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" host="ip-172-31-24-3" Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.634 [INFO][4901] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.3/26] block=192.168.116.0/26 
handle="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" host="ip-172-31-24-3" Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.634 [INFO][4901] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.3/26] handle="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" host="ip-172-31-24-3" Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.634 [INFO][4901] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:46.757070 containerd[2004]: 2025-12-16 12:28:46.634 [INFO][4901] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.3/26] IPv6=[] ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" HandleID="k8s-pod-network.c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Workload="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.757418 containerd[2004]: 2025-12-16 12:28:46.652 [INFO][4874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0", GenerateName:"calico-apiserver-7c59c4c686-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b0a11dd-5ec4-458d-86dc-437a0146fd85", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c59c4c686", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"calico-apiserver-7c59c4c686-xwscr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali398ccc1133e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:46.757566 containerd[2004]: 2025-12-16 12:28:46.656 [INFO][4874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.3/32] ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.757566 containerd[2004]: 2025-12-16 12:28:46.656 [INFO][4874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali398ccc1133e ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.757566 containerd[2004]: 2025-12-16 12:28:46.677 [INFO][4874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.757693 containerd[2004]: 2025-12-16 12:28:46.682 [INFO][4874] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0", GenerateName:"calico-apiserver-7c59c4c686-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b0a11dd-5ec4-458d-86dc-437a0146fd85", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c59c4c686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3", Pod:"calico-apiserver-7c59c4c686-xwscr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali398ccc1133e", MAC:"36:53:b6:af:2e:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:46.757820 containerd[2004]: 2025-12-16 12:28:46.743 [INFO][4874] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" Namespace="calico-apiserver" Pod="calico-apiserver-7c59c4c686-xwscr" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--apiserver--7c59c4c686--xwscr-eth0" Dec 16 12:28:46.812956 systemd[1]: Started cri-containerd-93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835.scope - libcontainer container 93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835. Dec 16 12:28:46.866082 containerd[2004]: time="2025-12-16T12:28:46.866027450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6lcv,Uid:62f184b6-8041-4d07-8a90-e198a52ad38e,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:46.882946 sshd[4937]: Connection closed by 139.178.89.65 port 50550 Dec 16 12:28:46.886253 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:46.891098 containerd[2004]: time="2025-12-16T12:28:46.890143058Z" level=info msg="connecting to shim c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3" address="unix:///run/containerd/s/31f2ff36c314cce927ec98c7e215d289e61808f3df14209c1092ee3317a7b436" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:46.904760 systemd[1]: sshd@7-172.31.24.3:22-139.178.89.65:50550.service: Deactivated successfully. Dec 16 12:28:46.918764 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:28:46.931379 systemd-logind[1973]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:28:46.941266 systemd-logind[1973]: Removed session 8. Dec 16 12:28:47.165339 systemd[1]: Started cri-containerd-c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3.scope - libcontainer container c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3. 
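The IPAM sequence logged above (confirm block affinity for 192.168.116.0/26, load the block, claim the next free address, write the block back to the datastore) can be sketched as a minimal toy model. This is an illustration of block-based allocation only, not Calico's implementation — the class name, the in-memory dict, and the truncated handle string are all hypothetical; real Calico persists blocks in the datastore and serializes allocation behind the host-wide IPAM lock seen in the log:

```python
import ipaddress

class IPAMBlock:
    """Toy model of a Calico-style IPAM block (e.g. 192.168.116.0/26).

    Illustrative only: real Calico stores blocks in the datastore and
    guards allocation with a host-wide lock plus compare-and-swap writes.
    """

    def __init__(self, cidr: str):
        self.cidr = ipaddress.ip_network(cidr)
        self.allocated = {}  # ip (str) -> handle (str)

    def auto_assign(self, handle: str) -> str:
        # Claim the next free host address in the block, mirroring
        # "Attempting to assign 1 addresses from block" in the log.
        for ip in self.cidr.hosts():
            s = str(ip)
            if s not in self.allocated:
                self.allocated[s] = handle
                return s
        raise RuntimeError(f"block {self.cidr} exhausted")

block = IPAMBlock("192.168.116.0/26")
for _ in range(2):  # earlier pods on this node already hold .1 and .2
    block.auto_assign("pre-existing")
ip = block.auto_assign("k8s-pod-network.c39abff0...")  # hypothetical handle
print(ip)  # 192.168.116.3 — the address claimed in the log above
```

In the real sequence, each host holds an affinity to one or more /26 blocks, so most allocations stay node-local and only the block write (not a global pool) needs to be serialized.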
Dec 16 12:28:47.352187 systemd-networkd[1888]: vxlan.calico: Gained IPv6LL Dec 16 12:28:47.468941 containerd[2004]: time="2025-12-16T12:28:47.468727417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-nm8c9,Uid:b28336ef-bcfa-4481-a4ab-447af79aaaba,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"93fdf50f612e12a5825d1f5a42e0b9eb9139e83d7b049f32ccdfbf651e239835\"" Dec 16 12:28:47.478412 containerd[2004]: time="2025-12-16T12:28:47.478163941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:47.502630 systemd-networkd[1888]: cali4a503d237ab: Link UP Dec 16 12:28:47.505949 systemd-networkd[1888]: cali4a503d237ab: Gained carrier Dec 16 12:28:47.551541 containerd[2004]: 2025-12-16 12:28:47.215 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0 coredns-66bc5c9577- kube-system 62f184b6-8041-4d07-8a90-e198a52ad38e 873 0 2025-12-16 12:27:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-3 coredns-66bc5c9577-w6lcv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a503d237ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-" Dec 16 12:28:47.551541 containerd[2004]: 2025-12-16 12:28:47.215 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" 
WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.551541 containerd[2004]: 2025-12-16 12:28:47.313 [INFO][5055] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" HandleID="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Workload="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.314 [INFO][5055] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" HandleID="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Workload="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035e190), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-3", "pod":"coredns-66bc5c9577-w6lcv", "timestamp":"2025-12-16 12:28:47.313078249 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.314 [INFO][5055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.315 [INFO][5055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.315 [INFO][5055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.344 [INFO][5055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" host="ip-172-31-24-3" Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.367 [INFO][5055] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.406 [INFO][5055] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.414 [INFO][5055] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.423 [INFO][5055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:47.551871 containerd[2004]: 2025-12-16 12:28:47.424 [INFO][5055] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" host="ip-172-31-24-3" Dec 16 12:28:47.552925 containerd[2004]: 2025-12-16 12:28:47.431 [INFO][5055] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af Dec 16 12:28:47.552925 containerd[2004]: 2025-12-16 12:28:47.446 [INFO][5055] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" host="ip-172-31-24-3" Dec 16 12:28:47.552925 containerd[2004]: 2025-12-16 12:28:47.476 [INFO][5055] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.4/26] block=192.168.116.0/26 
handle="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" host="ip-172-31-24-3" Dec 16 12:28:47.552925 containerd[2004]: 2025-12-16 12:28:47.477 [INFO][5055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.4/26] handle="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" host="ip-172-31-24-3" Dec 16 12:28:47.552925 containerd[2004]: 2025-12-16 12:28:47.477 [INFO][5055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:47.552925 containerd[2004]: 2025-12-16 12:28:47.477 [INFO][5055] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.4/26] IPv6=[] ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" HandleID="k8s-pod-network.9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Workload="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.554744 containerd[2004]: 2025-12-16 12:28:47.490 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"62f184b6-8041-4d07-8a90-e198a52ad38e", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"coredns-66bc5c9577-w6lcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a503d237ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:47.554744 containerd[2004]: 2025-12-16 12:28:47.491 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.4/32] ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.554744 containerd[2004]: 2025-12-16 12:28:47.491 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a503d237ab ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" 
WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.554744 containerd[2004]: 2025-12-16 12:28:47.502 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.554744 containerd[2004]: 2025-12-16 12:28:47.508 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"62f184b6-8041-4d07-8a90-e198a52ad38e", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af", Pod:"coredns-66bc5c9577-w6lcv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a503d237ab", MAC:"b6:57:3e:79:f4:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:47.554744 containerd[2004]: 2025-12-16 12:28:47.546 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" Namespace="kube-system" Pod="coredns-66bc5c9577-w6lcv" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--w6lcv-eth0" Dec 16 12:28:47.641942 containerd[2004]: time="2025-12-16T12:28:47.641868566Z" level=info msg="connecting to shim 9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af" address="unix:///run/containerd/s/be4998b3d0a38fe8d625a859c86ae64e830b541eeb62d28fad89d9d10f99c053" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:47.673337 systemd-networkd[1888]: cali7a4bbd32741: Gained IPv6LL Dec 16 12:28:47.765246 containerd[2004]: time="2025-12-16T12:28:47.763932255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:47.769121 systemd[1]: Started cri-containerd-9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af.scope - 
libcontainer container 9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af. Dec 16 12:28:47.771291 containerd[2004]: time="2025-12-16T12:28:47.770956635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:47.771933 containerd[2004]: time="2025-12-16T12:28:47.771880959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:47.774308 kubelet[3334]: E1216 12:28:47.774222 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:47.774308 kubelet[3334]: E1216 12:28:47.774304 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:47.775180 kubelet[3334]: E1216 12:28:47.774425 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c59c4c686-nm8c9_calico-apiserver(b28336ef-bcfa-4481-a4ab-447af79aaaba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" logger="UnhandledError" Dec 16 12:28:47.775180 kubelet[3334]: E1216 12:28:47.774494 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:28:47.815021 containerd[2004]: time="2025-12-16T12:28:47.814362951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c59c4c686-xwscr,Uid:4b0a11dd-5ec4-458d-86dc-437a0146fd85,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c39abff0d63b79032718e0d2aabea5a41959fa5857e7f42d9820e159271366f3\"" Dec 16 12:28:47.823625 containerd[2004]: time="2025-12-16T12:28:47.823273875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:28:47.851705 containerd[2004]: time="2025-12-16T12:28:47.851285127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qps4l,Uid:0821a17d-3c03-4228-a061-1c97b86f544e,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:47.868329 containerd[2004]: time="2025-12-16T12:28:47.868086987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrbsz,Uid:8b41ae06-ee3e-4832-a16f-282cefaf725a,Namespace:kube-system,Attempt:0,}" Dec 16 12:28:47.871399 containerd[2004]: time="2025-12-16T12:28:47.870845331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57cf7db4b7-6r27k,Uid:c54f17de-062d-4ea0-b0e3-144077363c3e,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:47.929350 systemd-networkd[1888]: cali398ccc1133e: Gained IPv6LL Dec 16 12:28:48.114423 containerd[2004]: time="2025-12-16T12:28:48.114270505Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Dec 16 12:28:48.119991 containerd[2004]: time="2025-12-16T12:28:48.118777861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w6lcv,Uid:62f184b6-8041-4d07-8a90-e198a52ad38e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af\"" Dec 16 12:28:48.127805 containerd[2004]: time="2025-12-16T12:28:48.126647293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:28:48.128172 containerd[2004]: time="2025-12-16T12:28:48.128074021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:48.128362 kubelet[3334]: E1216 12:28:48.128300 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:48.128455 kubelet[3334]: E1216 12:28:48.128370 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:28:48.128509 kubelet[3334]: E1216 12:28:48.128479 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-7c59c4c686-xwscr_calico-apiserver(4b0a11dd-5ec4-458d-86dc-437a0146fd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:48.128568 kubelet[3334]: E1216 12:28:48.128537 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:28:48.145576 containerd[2004]: time="2025-12-16T12:28:48.145200673Z" level=info msg="CreateContainer within sandbox \"9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:28:48.208410 containerd[2004]: time="2025-12-16T12:28:48.208296289Z" level=info msg="Container 082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:48.238994 containerd[2004]: time="2025-12-16T12:28:48.238841425Z" level=info msg="CreateContainer within sandbox \"9dd20f2fd13c8158c34fa33c0c5772028b2a13185ecdc9a7fdc7ed08695800af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1\"" Dec 16 12:28:48.248703 containerd[2004]: time="2025-12-16T12:28:48.248604205Z" level=info msg="StartContainer for \"082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1\"" Dec 16 12:28:48.260756 containerd[2004]: time="2025-12-16T12:28:48.260489497Z" level=info 
msg="connecting to shim 082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1" address="unix:///run/containerd/s/be4998b3d0a38fe8d625a859c86ae64e830b541eeb62d28fad89d9d10f99c053" protocol=ttrpc version=3 Dec 16 12:28:48.293021 kubelet[3334]: E1216 12:28:48.292881 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:28:48.342631 kubelet[3334]: E1216 12:28:48.342548 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:28:48.457392 systemd[1]: Started cri-containerd-082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1.scope - libcontainer container 082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1. 
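The kubelet entries above show the standard failure progression for a missing image tag: the pull returns NotFound, the first sync reports ErrImagePull, and subsequent syncs report ImagePullBackOff with a growing delay between retries. A rough sketch of that doubling back-off policy — the 10s base and 5m cap are kubelet's commonly cited defaults, used here as illustrative constants rather than values read from this node's configuration:

```python
def image_pull_backoff(failures: int, base: float = 10.0, cap: float = 300.0) -> float:
    """Return the back-off delay in seconds before the next pull attempt.

    Sketch of a doubling back-off: base * 2^(failures - 1), capped.
    Constants are illustrative defaults, not this node's settings.
    """
    if failures <= 0:
        return 0.0
    return min(base * (2 ** (failures - 1)), cap)

# Delay schedule across the first six consecutive pull failures.
delays = [image_pull_backoff(n) for n in range(1, 7)]
print(delays)  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```

Once the delay reaches the cap, the pod stays in ImagePullBackOff and retries at the capped interval until the tag becomes resolvable (here, until ghcr.io/flatcar/calico/apiserver:v3.30.4 exists in the registry) or the pod spec is corrected.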
Dec 16 12:28:48.639396 systemd-networkd[1888]: cali4b4273a74de: Link UP Dec 16 12:28:48.642743 systemd-networkd[1888]: cali4b4273a74de: Gained carrier Dec 16 12:28:48.670045 containerd[2004]: time="2025-12-16T12:28:48.669055119Z" level=info msg="StartContainer for \"082b8825138a37a67774e05105227c31a090d156d64ee307840856cc649f3db1\" returns successfully" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.108 [INFO][5125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0 calico-kube-controllers-57cf7db4b7- calico-system c54f17de-062d-4ea0-b0e3-144077363c3e 871 0 2025-12-16 12:28:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57cf7db4b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-3 calico-kube-controllers-57cf7db4b7-6r27k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4b4273a74de [] [] }} ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.109 [INFO][5125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.282 [INFO][5176] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" HandleID="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Workload="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.290 [INFO][5176] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" HandleID="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Workload="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c330), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-3", "pod":"calico-kube-controllers-57cf7db4b7-6r27k", "timestamp":"2025-12-16 12:28:48.282301633 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.294 [INFO][5176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.304 [INFO][5176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.306 [INFO][5176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.397 [INFO][5176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.427 [INFO][5176] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.516 [INFO][5176] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.522 [INFO][5176] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.530 [INFO][5176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.531 [INFO][5176] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.540 [INFO][5176] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6 Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.554 [INFO][5176] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.585 [INFO][5176] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.5/26] block=192.168.116.0/26 
handle="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.585 [INFO][5176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.5/26] handle="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" host="ip-172-31-24-3" Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.586 [INFO][5176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:48.694207 containerd[2004]: 2025-12-16 12:28:48.586 [INFO][5176] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.5/26] IPv6=[] ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" HandleID="k8s-pod-network.869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Workload="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.699028 containerd[2004]: 2025-12-16 12:28:48.599 [INFO][5125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0", GenerateName:"calico-kube-controllers-57cf7db4b7-", Namespace:"calico-system", SelfLink:"", UID:"c54f17de-062d-4ea0-b0e3-144077363c3e", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57cf7db4b7", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"calico-kube-controllers-57cf7db4b7-6r27k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b4273a74de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:48.699028 containerd[2004]: 2025-12-16 12:28:48.600 [INFO][5125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.5/32] ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.699028 containerd[2004]: 2025-12-16 12:28:48.600 [INFO][5125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b4273a74de ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.699028 containerd[2004]: 2025-12-16 12:28:48.647 [INFO][5125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" 
WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.699028 containerd[2004]: 2025-12-16 12:28:48.650 [INFO][5125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0", GenerateName:"calico-kube-controllers-57cf7db4b7-", Namespace:"calico-system", SelfLink:"", UID:"c54f17de-062d-4ea0-b0e3-144077363c3e", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57cf7db4b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6", Pod:"calico-kube-controllers-57cf7db4b7-6r27k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4b4273a74de", MAC:"3a:4f:12:27:d6:80", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:48.699028 containerd[2004]: 2025-12-16 12:28:48.689 [INFO][5125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" Namespace="calico-system" Pod="calico-kube-controllers-57cf7db4b7-6r27k" WorkloadEndpoint="ip--172--31--24--3-k8s-calico--kube--controllers--57cf7db4b7--6r27k-eth0" Dec 16 12:28:48.772928 containerd[2004]: time="2025-12-16T12:28:48.772753576Z" level=info msg="connecting to shim 869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6" address="unix:///run/containerd/s/5d68cfc4d22d36dc6c4a3ad3faf1836bfcffab5abd268a465f22bc05d1e5640a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:48.823305 systemd-networkd[1888]: cali2b68dcc6c31: Link UP Dec 16 12:28:48.831922 systemd-networkd[1888]: cali2b68dcc6c31: Gained carrier Dec 16 12:28:48.894728 systemd[1]: Started cri-containerd-869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6.scope - libcontainer container 869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6. 
Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.226 [INFO][5131] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0 csi-node-driver- calico-system 0821a17d-3c03-4228-a061-1c97b86f544e 775 0 2025-12-16 12:28:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-3 csi-node-driver-qps4l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2b68dcc6c31 [] [] }} ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.227 [INFO][5131] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.515 [INFO][5190] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" HandleID="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Workload="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.517 [INFO][5190] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" HandleID="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" 
Workload="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-3", "pod":"csi-node-driver-qps4l", "timestamp":"2025-12-16 12:28:48.515093547 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.518 [INFO][5190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.585 [INFO][5190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.586 [INFO][5190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.640 [INFO][5190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.661 [INFO][5190] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.693 [INFO][5190] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.708 [INFO][5190] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.718 [INFO][5190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.718 [INFO][5190] ipam/ipam.go 1219: Attempting to assign 1 
addresses from block block=192.168.116.0/26 handle="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.725 [INFO][5190] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3 Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.748 [INFO][5190] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.781 [INFO][5190] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.6/26] block=192.168.116.0/26 handle="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.781 [INFO][5190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.6/26] handle="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" host="ip-172-31-24-3" Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.781 [INFO][5190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:28:48.930464 containerd[2004]: 2025-12-16 12:28:48.781 [INFO][5190] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.6/26] IPv6=[] ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" HandleID="k8s-pod-network.dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Workload="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:48.935455 containerd[2004]: 2025-12-16 12:28:48.795 [INFO][5131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0821a17d-3c03-4228-a061-1c97b86f544e", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"csi-node-driver-qps4l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b68dcc6c31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:48.935455 containerd[2004]: 2025-12-16 12:28:48.795 [INFO][5131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.6/32] ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:48.935455 containerd[2004]: 2025-12-16 12:28:48.795 [INFO][5131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b68dcc6c31 ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:48.935455 containerd[2004]: 2025-12-16 12:28:48.838 [INFO][5131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:48.935455 containerd[2004]: 2025-12-16 12:28:48.844 [INFO][5131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0821a17d-3c03-4228-a061-1c97b86f544e", 
ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3", Pod:"csi-node-driver-qps4l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2b68dcc6c31", MAC:"16:a2:01:b9:4e:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:48.935455 containerd[2004]: 2025-12-16 12:28:48.890 [INFO][5131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" Namespace="calico-system" Pod="csi-node-driver-qps4l" WorkloadEndpoint="ip--172--31--24--3-k8s-csi--node--driver--qps4l-eth0" Dec 16 12:28:49.079771 containerd[2004]: time="2025-12-16T12:28:49.078573637Z" level=info msg="connecting to shim dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3" address="unix:///run/containerd/s/5184cc1313433df5f1671de0cc04798e78864a2fbbe0e9b788ede56cdf72d6f1" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:49.156223 systemd-networkd[1888]: cali01918199f37: Link UP 
Dec 16 12:28:49.163209 systemd-networkd[1888]: cali01918199f37: Gained carrier Dec 16 12:28:49.207690 systemd[1]: Started cri-containerd-dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3.scope - libcontainer container dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3. Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.273 [INFO][5135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0 coredns-66bc5c9577- kube-system 8b41ae06-ee3e-4832-a16f-282cefaf725a 874 0 2025-12-16 12:27:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-3 coredns-66bc5c9577-mrbsz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali01918199f37 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.277 [INFO][5135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.571 [INFO][5201] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" HandleID="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" 
Workload="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.573 [INFO][5201] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" HandleID="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Workload="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa7c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-3", "pod":"coredns-66bc5c9577-mrbsz", "timestamp":"2025-12-16 12:28:48.571948575 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.573 [INFO][5201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.783 [INFO][5201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.784 [INFO][5201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.841 [INFO][5201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.884 [INFO][5201] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:48.941 [INFO][5201] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.010 [INFO][5201] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.055 [INFO][5201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.055 [INFO][5201] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.074 [INFO][5201] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927 Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.087 [INFO][5201] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.110 [INFO][5201] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.7/26] block=192.168.116.0/26 
handle="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.110 [INFO][5201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.7/26] handle="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" host="ip-172-31-24-3" Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.111 [INFO][5201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:49.218685 containerd[2004]: 2025-12-16 12:28:49.114 [INFO][5201] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.7/26] IPv6=[] ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" HandleID="k8s-pod-network.a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Workload="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.222425 containerd[2004]: 2025-12-16 12:28:49.132 [INFO][5135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8b41ae06-ee3e-4832-a16f-282cefaf725a", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"coredns-66bc5c9577-mrbsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01918199f37", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:49.222425 containerd[2004]: 2025-12-16 12:28:49.135 [INFO][5135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.7/32] ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.222425 containerd[2004]: 2025-12-16 12:28:49.135 [INFO][5135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01918199f37 ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" 
WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.222425 containerd[2004]: 2025-12-16 12:28:49.158 [INFO][5135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.222425 containerd[2004]: 2025-12-16 12:28:49.161 [INFO][5135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"8b41ae06-ee3e-4832-a16f-282cefaf725a", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927", Pod:"coredns-66bc5c9577-mrbsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali01918199f37", MAC:"9e:0f:bc:2b:03:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:49.222425 containerd[2004]: 2025-12-16 12:28:49.202 [INFO][5135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" Namespace="kube-system" Pod="coredns-66bc5c9577-mrbsz" WorkloadEndpoint="ip--172--31--24--3-k8s-coredns--66bc5c9577--mrbsz-eth0" Dec 16 12:28:49.272324 systemd-networkd[1888]: cali4a503d237ab: Gained IPv6LL Dec 16 12:28:49.311094 containerd[2004]: time="2025-12-16T12:28:49.310953710Z" level=info msg="connecting to shim a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927" address="unix:///run/containerd/s/9f8cb6540c83f0349ba3c31d4411432817ab500451f13b57237147c585df754b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:49.391815 kubelet[3334]: E1216 12:28:49.391174 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:28:49.392679 kubelet[3334]: E1216 12:28:49.392511 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:28:49.448646 containerd[2004]: time="2025-12-16T12:28:49.448025871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57cf7db4b7-6r27k,Uid:c54f17de-062d-4ea0-b0e3-144077363c3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"869820da9c2a5312a112227e49cd31d1d180445d5d71f8f56f9fb566fbb950c6\"" Dec 16 12:28:49.461862 containerd[2004]: time="2025-12-16T12:28:49.461737899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:28:49.510429 systemd[1]: Started cri-containerd-a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927.scope - libcontainer container a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927. 
Dec 16 12:28:49.515448 kubelet[3334]: I1216 12:28:49.514319 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w6lcv" podStartSLOduration=56.514291935 podStartE2EDuration="56.514291935s" podCreationTimestamp="2025-12-16 12:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:49.508565031 +0000 UTC m=+61.071690604" watchObservedRunningTime="2025-12-16 12:28:49.514291935 +0000 UTC m=+61.077417520" Dec 16 12:28:49.620033 containerd[2004]: time="2025-12-16T12:28:49.619881628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qps4l,Uid:0821a17d-3c03-4228-a061-1c97b86f544e,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc749c81e6dc5d3d19bf67e9c39bf8b0bc7a68f7623154784b6f8894a59381e3\"" Dec 16 12:28:49.704910 containerd[2004]: time="2025-12-16T12:28:49.704846548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mrbsz,Uid:8b41ae06-ee3e-4832-a16f-282cefaf725a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927\"" Dec 16 12:28:49.716398 containerd[2004]: time="2025-12-16T12:28:49.716349004Z" level=info msg="CreateContainer within sandbox \"a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:28:49.735065 containerd[2004]: time="2025-12-16T12:28:49.734253989Z" level=info msg="Container 074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:28:49.743240 containerd[2004]: time="2025-12-16T12:28:49.743161349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:49.746758 containerd[2004]: time="2025-12-16T12:28:49.746658341Z" level=info msg="CreateContainer within sandbox 
\"a38829e0d65fa71aff4f6e9821f8c0f90f63e66b270445b32de878a7729db927\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b\"" Dec 16 12:28:49.747225 containerd[2004]: time="2025-12-16T12:28:49.746997893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:28:49.747493 containerd[2004]: time="2025-12-16T12:28:49.747008549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:28:49.749239 kubelet[3334]: E1216 12:28:49.749078 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:49.750142 kubelet[3334]: E1216 12:28:49.749451 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:28:49.750458 kubelet[3334]: E1216 12:28:49.749671 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57cf7db4b7-6r27k_calico-system(c54f17de-062d-4ea0-b0e3-144077363c3e): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:49.753004 containerd[2004]: time="2025-12-16T12:28:49.750861869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:28:49.753004 containerd[2004]: time="2025-12-16T12:28:49.750884105Z" level=info msg="StartContainer for \"074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b\"" Dec 16 12:28:49.754447 kubelet[3334]: E1216 12:28:49.752887 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:28:49.757631 containerd[2004]: time="2025-12-16T12:28:49.757569557Z" level=info msg="connecting to shim 074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b" address="unix:///run/containerd/s/9f8cb6540c83f0349ba3c31d4411432817ab500451f13b57237147c585df754b" protocol=ttrpc version=3 Dec 16 12:28:49.803300 systemd[1]: Started cri-containerd-074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b.scope - libcontainer container 074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b. 
Dec 16 12:28:49.844511 containerd[2004]: time="2025-12-16T12:28:49.844445297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m7qfq,Uid:59a087e1-e448-441a-b97a-fe80bf31dd45,Namespace:calico-system,Attempt:0,}" Dec 16 12:28:49.901264 containerd[2004]: time="2025-12-16T12:28:49.901197377Z" level=info msg="StartContainer for \"074dfba9a315f404b1e8b1d5e7f9450e6a58855d08f1005c42e9eb7e396bc03b\" returns successfully" Dec 16 12:28:50.041325 systemd-networkd[1888]: cali2b68dcc6c31: Gained IPv6LL Dec 16 12:28:50.059223 containerd[2004]: time="2025-12-16T12:28:50.059146922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:50.062201 containerd[2004]: time="2025-12-16T12:28:50.062120630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:28:50.062810 containerd[2004]: time="2025-12-16T12:28:50.062259794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:28:50.063867 kubelet[3334]: E1216 12:28:50.063763 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:50.064043 kubelet[3334]: E1216 12:28:50.063904 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:28:50.064230 kubelet[3334]: E1216 12:28:50.064087 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:50.067339 containerd[2004]: time="2025-12-16T12:28:50.067174346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:28:50.132994 systemd-networkd[1888]: caliacd11a6fc0a: Link UP Dec 16 12:28:50.137652 systemd-networkd[1888]: caliacd11a6fc0a: Gained carrier Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:49.962 [INFO][5455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0 goldmane-7c778bb748- calico-system 59a087e1-e448-441a-b97a-fe80bf31dd45 879 0 2025-12-16 12:28:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-3 goldmane-7c778bb748-m7qfq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliacd11a6fc0a [] [] }} ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:49.962 [INFO][5455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" 
Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.025 [INFO][5475] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" HandleID="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Workload="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.025 [INFO][5475] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" HandleID="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Workload="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b3b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-3", "pod":"goldmane-7c778bb748-m7qfq", "timestamp":"2025-12-16 12:28:50.025481582 +0000 UTC"}, Hostname:"ip-172-31-24-3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.025 [INFO][5475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.026 [INFO][5475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.026 [INFO][5475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-3' Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.052 [INFO][5475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.060 [INFO][5475] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.076 [INFO][5475] ipam/ipam.go 511: Trying affinity for 192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.080 [INFO][5475] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.086 [INFO][5475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.0/26 host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.086 [INFO][5475] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.0/26 handle="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.090 [INFO][5475] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.099 [INFO][5475] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.0/26 handle="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.116 [INFO][5475] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.8/26] block=192.168.116.0/26 
handle="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.116 [INFO][5475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.8/26] handle="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" host="ip-172-31-24-3" Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.116 [INFO][5475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:28:50.176257 containerd[2004]: 2025-12-16 12:28:50.116 [INFO][5475] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.8/26] IPv6=[] ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" HandleID="k8s-pod-network.d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Workload="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.178498 containerd[2004]: 2025-12-16 12:28:50.123 [INFO][5455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"59a087e1-e448-441a-b97a-fe80bf31dd45", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"", Pod:"goldmane-7c778bb748-m7qfq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacd11a6fc0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:50.178498 containerd[2004]: 2025-12-16 12:28:50.123 [INFO][5455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.8/32] ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.178498 containerd[2004]: 2025-12-16 12:28:50.123 [INFO][5455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacd11a6fc0a ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.178498 containerd[2004]: 2025-12-16 12:28:50.139 [INFO][5455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.178498 containerd[2004]: 2025-12-16 12:28:50.141 [INFO][5455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" 
Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"59a087e1-e448-441a-b97a-fe80bf31dd45", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 28, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-3", ContainerID:"d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe", Pod:"goldmane-7c778bb748-m7qfq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliacd11a6fc0a", MAC:"d2:17:3f:f7:a7:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:28:50.178498 containerd[2004]: 2025-12-16 12:28:50.168 [INFO][5455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" Namespace="calico-system" Pod="goldmane-7c778bb748-m7qfq" WorkloadEndpoint="ip--172--31--24--3-k8s-goldmane--7c778bb748--m7qfq-eth0" Dec 16 12:28:50.231577 
containerd[2004]: time="2025-12-16T12:28:50.231508695Z" level=info msg="connecting to shim d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe" address="unix:///run/containerd/s/33074babf771debc16beafa9249e91fb1f239be219a01c750b216e44ead86dd4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:28:50.284286 systemd[1]: Started cri-containerd-d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe.scope - libcontainer container d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe. Dec 16 12:28:50.296274 systemd-networkd[1888]: cali4b4273a74de: Gained IPv6LL Dec 16 12:28:50.331900 containerd[2004]: time="2025-12-16T12:28:50.331687996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:50.334137 containerd[2004]: time="2025-12-16T12:28:50.334073536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:28:50.334365 containerd[2004]: time="2025-12-16T12:28:50.334092496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:28:50.334669 kubelet[3334]: E1216 12:28:50.334600 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:50.335293 kubelet[3334]: E1216 12:28:50.334668 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:28:50.335293 kubelet[3334]: E1216 12:28:50.334790 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:50.335293 kubelet[3334]: E1216 12:28:50.334864 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:50.373827 containerd[2004]: time="2025-12-16T12:28:50.373757920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-m7qfq,Uid:59a087e1-e448-441a-b97a-fe80bf31dd45,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"d2a6a48e15fd69d762fe6ab8aa61f83ac4711039b9d3b23e7b9286033b96fffe\"" Dec 16 12:28:50.377785 containerd[2004]: time="2025-12-16T12:28:50.377610496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:28:50.407855 kubelet[3334]: E1216 12:28:50.407767 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:50.434094 kubelet[3334]: E1216 12:28:50.433586 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 
12:28:50.586014 kubelet[3334]: I1216 12:28:50.584958 3334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mrbsz" podStartSLOduration=57.584912585 podStartE2EDuration="57.584912585s" podCreationTimestamp="2025-12-16 12:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:28:50.507233272 +0000 UTC m=+62.070358869" watchObservedRunningTime="2025-12-16 12:28:50.584912585 +0000 UTC m=+62.148038146" Dec 16 12:28:50.664043 containerd[2004]: time="2025-12-16T12:28:50.663939137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:28:50.666220 containerd[2004]: time="2025-12-16T12:28:50.666146213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:28:50.666358 containerd[2004]: time="2025-12-16T12:28:50.666274109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:28:50.666564 kubelet[3334]: E1216 12:28:50.666511 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:50.666653 kubelet[3334]: E1216 12:28:50.666577 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:28:50.666721 kubelet[3334]: E1216 12:28:50.666694 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m7qfq_calico-system(59a087e1-e448-441a-b97a-fe80bf31dd45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:28:50.666775 kubelet[3334]: E1216 12:28:50.666744 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:28:51.064181 systemd-networkd[1888]: cali01918199f37: Gained IPv6LL Dec 16 12:28:51.320274 systemd-networkd[1888]: caliacd11a6fc0a: Gained IPv6LL Dec 16 12:28:51.436633 kubelet[3334]: E1216 12:28:51.436553 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:28:51.437313 kubelet[3334]: E1216 
12:28:51.436805 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:28:51.439235 kubelet[3334]: E1216 12:28:51.439094 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:28:51.920605 systemd[1]: Started sshd@8-172.31.24.3:22-139.178.89.65:47878.service - OpenSSH per-connection server daemon (139.178.89.65:47878). 
Dec 16 12:28:52.133394 sshd[5546]: Accepted publickey for core from 139.178.89.65 port 47878 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:52.136294 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:52.145954 systemd-logind[1973]: New session 9 of user core. Dec 16 12:28:52.159274 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:28:52.430263 sshd[5549]: Connection closed by 139.178.89.65 port 47878 Dec 16 12:28:52.431105 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:52.441807 systemd-logind[1973]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:28:52.442522 systemd[1]: sshd@8-172.31.24.3:22-139.178.89.65:47878.service: Deactivated successfully. Dec 16 12:28:52.448029 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:28:52.452841 systemd-logind[1973]: Removed session 9. Dec 16 12:28:53.880440 ntpd[2207]: Listen normally on 6 vxlan.calico 192.168.116.0:123 Dec 16 12:28:53.880568 ntpd[2207]: Listen normally on 7 cali4a5c6251854 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 6 vxlan.calico 192.168.116.0:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 7 cali4a5c6251854 [fe80::ecee:eeff:feee:eeee%4]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 8 vxlan.calico [fe80::646e:e4ff:febd:26fc%5]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 9 cali7a4bbd32741 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 10 cali398ccc1133e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 11 cali4a503d237ab [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 
ntpd[2207]: Listen normally on 12 cali4b4273a74de [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 13 cali2b68dcc6c31 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 14 cali01918199f37 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:28:53.881135 ntpd[2207]: 16 Dec 12:28:53 ntpd[2207]: Listen normally on 15 caliacd11a6fc0a [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:28:53.880621 ntpd[2207]: Listen normally on 8 vxlan.calico [fe80::646e:e4ff:febd:26fc%5]:123 Dec 16 12:28:53.880672 ntpd[2207]: Listen normally on 9 cali7a4bbd32741 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 16 12:28:53.880719 ntpd[2207]: Listen normally on 10 cali398ccc1133e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 16 12:28:53.880766 ntpd[2207]: Listen normally on 11 cali4a503d237ab [fe80::ecee:eeff:feee:eeee%10]:123 Dec 16 12:28:53.880818 ntpd[2207]: Listen normally on 12 cali4b4273a74de [fe80::ecee:eeff:feee:eeee%11]:123 Dec 16 12:28:53.880865 ntpd[2207]: Listen normally on 13 cali2b68dcc6c31 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 16 12:28:53.880923 ntpd[2207]: Listen normally on 14 cali01918199f37 [fe80::ecee:eeff:feee:eeee%13]:123 Dec 16 12:28:53.881016 ntpd[2207]: Listen normally on 15 caliacd11a6fc0a [fe80::ecee:eeff:feee:eeee%14]:123 Dec 16 12:28:57.473944 systemd[1]: Started sshd@9-172.31.24.3:22-139.178.89.65:47894.service - OpenSSH per-connection server daemon (139.178.89.65:47894). Dec 16 12:28:57.676014 sshd[5572]: Accepted publickey for core from 139.178.89.65 port 47894 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:57.679113 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:57.688133 systemd-logind[1973]: New session 10 of user core. Dec 16 12:28:57.694270 systemd[1]: Started session-10.scope - Session 10 of User core. 
Dec 16 12:28:57.954437 sshd[5575]: Connection closed by 139.178.89.65 port 47894 Dec 16 12:28:57.955512 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:57.962850 systemd[1]: sshd@9-172.31.24.3:22-139.178.89.65:47894.service: Deactivated successfully. Dec 16 12:28:57.968680 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:28:57.971849 systemd-logind[1973]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:28:57.975495 systemd-logind[1973]: Removed session 10. Dec 16 12:28:57.993144 systemd[1]: Started sshd@10-172.31.24.3:22-139.178.89.65:47908.service - OpenSSH per-connection server daemon (139.178.89.65:47908). Dec 16 12:28:58.187821 sshd[5587]: Accepted publickey for core from 139.178.89.65 port 47908 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:58.190389 sshd-session[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:58.199838 systemd-logind[1973]: New session 11 of user core. Dec 16 12:28:58.205256 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:28:58.550641 sshd[5591]: Connection closed by 139.178.89.65 port 47908 Dec 16 12:28:58.551832 sshd-session[5587]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:58.568298 systemd[1]: sshd@10-172.31.24.3:22-139.178.89.65:47908.service: Deactivated successfully. Dec 16 12:28:58.578595 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:28:58.582225 systemd-logind[1973]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:28:58.611496 systemd[1]: Started sshd@11-172.31.24.3:22-139.178.89.65:47924.service - OpenSSH per-connection server daemon (139.178.89.65:47924). Dec 16 12:28:58.615068 systemd-logind[1973]: Removed session 11. 
Dec 16 12:28:58.821521 sshd[5601]: Accepted publickey for core from 139.178.89.65 port 47924 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:28:58.824313 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:58.833376 systemd-logind[1973]: New session 12 of user core. Dec 16 12:28:58.841318 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:28:59.136831 sshd[5606]: Connection closed by 139.178.89.65 port 47924 Dec 16 12:28:59.135575 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Dec 16 12:28:59.143496 systemd[1]: sshd@11-172.31.24.3:22-139.178.89.65:47924.service: Deactivated successfully. Dec 16 12:28:59.148374 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:28:59.150919 systemd-logind[1973]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:28:59.154710 systemd-logind[1973]: Removed session 12. Dec 16 12:29:00.840099 containerd[2004]: time="2025-12-16T12:29:00.839937796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:29:01.112835 containerd[2004]: time="2025-12-16T12:29:01.112619149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:01.115125 containerd[2004]: time="2025-12-16T12:29:01.115030429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:29:01.115457 containerd[2004]: time="2025-12-16T12:29:01.115180945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:29:01.115639 kubelet[3334]: E1216 12:29:01.115399 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:01.115639 kubelet[3334]: E1216 12:29:01.115465 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:01.115639 kubelet[3334]: E1216 12:29:01.115583 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:01.120706 containerd[2004]: time="2025-12-16T12:29:01.120568981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:29:01.425501 containerd[2004]: time="2025-12-16T12:29:01.425221539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:01.427610 containerd[2004]: time="2025-12-16T12:29:01.427456935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:29:01.427610 containerd[2004]: time="2025-12-16T12:29:01.427532727Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:01.428296 kubelet[3334]: E1216 12:29:01.427817 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:01.428296 kubelet[3334]: E1216 12:29:01.427885 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:01.428296 kubelet[3334]: E1216 12:29:01.428058 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:01.428480 kubelet[3334]: E1216 12:29:01.428134 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:29:02.840773 containerd[2004]: time="2025-12-16T12:29:02.840703650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:03.130310 containerd[2004]: time="2025-12-16T12:29:03.130148163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:03.132477 containerd[2004]: time="2025-12-16T12:29:03.132315015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:03.132477 containerd[2004]: time="2025-12-16T12:29:03.132387339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:03.132899 kubelet[3334]: E1216 12:29:03.132832 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:03.134232 kubelet[3334]: E1216 12:29:03.132906 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:03.134232 kubelet[3334]: E1216 12:29:03.133075 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57cf7db4b7-6r27k_calico-system(c54f17de-062d-4ea0-b0e3-144077363c3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:03.134232 kubelet[3334]: E1216 12:29:03.133134 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:29:03.842274 containerd[2004]: time="2025-12-16T12:29:03.841907467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:04.126259 containerd[2004]: time="2025-12-16T12:29:04.126084796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:04.128241 containerd[2004]: time="2025-12-16T12:29:04.128177728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:04.128390 containerd[2004]: time="2025-12-16T12:29:04.128306428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:04.128586 kubelet[3334]: E1216 12:29:04.128533 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:04.128660 kubelet[3334]: E1216 12:29:04.128599 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:04.128947 kubelet[3334]: E1216 12:29:04.128884 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c59c4c686-nm8c9_calico-apiserver(b28336ef-bcfa-4481-a4ab-447af79aaaba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:04.129290 kubelet[3334]: E1216 12:29:04.128953 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:29:04.129668 containerd[2004]: time="2025-12-16T12:29:04.129269044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:29:04.171057 systemd[1]: Started sshd@12-172.31.24.3:22-139.178.89.65:56798.service - OpenSSH per-connection server daemon (139.178.89.65:56798). Dec 16 12:29:04.366642 sshd[5625]: Accepted publickey for core from 139.178.89.65 port 56798 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:04.369676 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:04.381395 systemd-logind[1973]: New session 13 of user core. Dec 16 12:29:04.390872 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:29:04.405287 containerd[2004]: time="2025-12-16T12:29:04.405096041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:04.409012 containerd[2004]: time="2025-12-16T12:29:04.408849317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:29:04.410465 containerd[2004]: time="2025-12-16T12:29:04.408906749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:04.410570 kubelet[3334]: E1216 12:29:04.409711 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:04.410570 kubelet[3334]: E1216 12:29:04.409777 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:04.412696 kubelet[3334]: E1216 12:29:04.411693 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m7qfq_calico-system(59a087e1-e448-441a-b97a-fe80bf31dd45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:04.412696 kubelet[3334]: E1216 12:29:04.411766 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:29:04.414367 containerd[2004]: time="2025-12-16T12:29:04.413475785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:04.679362 containerd[2004]: time="2025-12-16T12:29:04.679152223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:04.682108 containerd[2004]: time="2025-12-16T12:29:04.681327211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:04.683041 containerd[2004]: time="2025-12-16T12:29:04.681387307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:04.683208 kubelet[3334]: E1216 12:29:04.682706 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:04.683208 kubelet[3334]: E1216 12:29:04.682764 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:04.685220 kubelet[3334]: E1216 12:29:04.684077 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c59c4c686-xwscr_calico-apiserver(4b0a11dd-5ec4-458d-86dc-437a0146fd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:04.685220 kubelet[3334]: E1216 12:29:04.684170 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:29:04.706696 sshd[5631]: Connection closed by 139.178.89.65 port 56798 Dec 16 12:29:04.709313 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:04.719675 systemd[1]: sshd@12-172.31.24.3:22-139.178.89.65:56798.service: Deactivated successfully. Dec 16 12:29:04.726231 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:29:04.729500 systemd-logind[1973]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:29:04.734908 systemd-logind[1973]: Removed session 13. Dec 16 12:29:05.840015 containerd[2004]: time="2025-12-16T12:29:05.839022417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:29:06.105812 containerd[2004]: time="2025-12-16T12:29:06.105525162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:06.107908 containerd[2004]: time="2025-12-16T12:29:06.107730438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:29:06.107908 containerd[2004]: time="2025-12-16T12:29:06.107855034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:29:06.108182 kubelet[3334]: E1216 12:29:06.108088 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:06.108182 kubelet[3334]: E1216 12:29:06.108145 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:06.108762 kubelet[3334]: E1216 12:29:06.108257 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:06.111313 containerd[2004]: time="2025-12-16T12:29:06.110897142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:29:06.373444 containerd[2004]: time="2025-12-16T12:29:06.373305679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:06.376466 containerd[2004]: time="2025-12-16T12:29:06.376287403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:29:06.376466 containerd[2004]: time="2025-12-16T12:29:06.376421563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:29:06.377104 kubelet[3334]: E1216 12:29:06.376931 3334 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:06.377422 kubelet[3334]: E1216 12:29:06.377292 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:06.378672 kubelet[3334]: E1216 12:29:06.378631 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:06.378945 kubelet[3334]: E1216 12:29:06.378875 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:29:09.746015 systemd[1]: Started sshd@13-172.31.24.3:22-139.178.89.65:56804.service - OpenSSH per-connection server daemon (139.178.89.65:56804). Dec 16 12:29:09.963654 sshd[5651]: Accepted publickey for core from 139.178.89.65 port 56804 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:09.969182 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:09.979686 systemd-logind[1973]: New session 14 of user core. Dec 16 12:29:09.988584 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 12:29:10.292287 sshd[5655]: Connection closed by 139.178.89.65 port 56804 Dec 16 12:29:10.293355 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:10.303269 systemd[1]: sshd@13-172.31.24.3:22-139.178.89.65:56804.service: Deactivated successfully. Dec 16 12:29:10.309766 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:29:10.312591 systemd-logind[1973]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:29:10.317279 systemd-logind[1973]: Removed session 14. 
Dec 16 12:29:12.842748 kubelet[3334]: E1216 12:29:12.840885 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:29:15.343493 systemd[1]: Started sshd@14-172.31.24.3:22-139.178.89.65:51510.service - OpenSSH per-connection server daemon (139.178.89.65:51510). Dec 16 12:29:15.569516 sshd[5695]: Accepted publickey for core from 139.178.89.65 port 51510 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:15.574139 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:15.587345 systemd-logind[1973]: New session 15 of user core. Dec 16 12:29:15.597747 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 16 12:29:15.842581 kubelet[3334]: E1216 12:29:15.842398 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:29:15.848667 sshd[5698]: Connection closed by 139.178.89.65 port 51510 Dec 16 12:29:15.850569 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:15.861868 systemd[1]: sshd@14-172.31.24.3:22-139.178.89.65:51510.service: Deactivated successfully. Dec 16 12:29:15.872350 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:29:15.877572 systemd-logind[1973]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:29:15.882483 systemd-logind[1973]: Removed session 15. 
Dec 16 12:29:16.841994 kubelet[3334]: E1216 12:29:16.841225 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:29:17.840155 kubelet[3334]: E1216 12:29:17.839838 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:29:18.839756 kubelet[3334]: E1216 12:29:18.839628 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:29:19.840375 kubelet[3334]: E1216 12:29:19.840247 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:29:20.884567 systemd[1]: Started sshd@15-172.31.24.3:22-139.178.89.65:36988.service - OpenSSH per-connection server daemon (139.178.89.65:36988). Dec 16 12:29:21.090784 sshd[5713]: Accepted publickey for core from 139.178.89.65 port 36988 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:21.093475 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:21.102785 systemd-logind[1973]: New session 16 of user core. Dec 16 12:29:21.115245 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 12:29:21.379030 sshd[5716]: Connection closed by 139.178.89.65 port 36988 Dec 16 12:29:21.379893 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:21.390331 systemd[1]: sshd@15-172.31.24.3:22-139.178.89.65:36988.service: Deactivated successfully. Dec 16 12:29:21.394691 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:29:21.399190 systemd-logind[1973]: Session 16 logged out. 
Waiting for processes to exit. Dec 16 12:29:21.416591 systemd-logind[1973]: Removed session 16. Dec 16 12:29:21.418380 systemd[1]: Started sshd@16-172.31.24.3:22-139.178.89.65:36996.service - OpenSSH per-connection server daemon (139.178.89.65:36996). Dec 16 12:29:21.628717 sshd[5728]: Accepted publickey for core from 139.178.89.65 port 36996 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:21.631116 sshd-session[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:21.639413 systemd-logind[1973]: New session 17 of user core. Dec 16 12:29:21.648267 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:29:22.083475 sshd[5731]: Connection closed by 139.178.89.65 port 36996 Dec 16 12:29:22.084831 sshd-session[5728]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:22.092348 systemd-logind[1973]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:29:22.092911 systemd[1]: sshd@16-172.31.24.3:22-139.178.89.65:36996.service: Deactivated successfully. Dec 16 12:29:22.097550 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:29:22.120606 systemd-logind[1973]: Removed session 17. Dec 16 12:29:22.121407 systemd[1]: Started sshd@17-172.31.24.3:22-139.178.89.65:37000.service - OpenSSH per-connection server daemon (139.178.89.65:37000). Dec 16 12:29:22.320480 sshd[5741]: Accepted publickey for core from 139.178.89.65 port 37000 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:22.324452 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:22.334117 systemd-logind[1973]: New session 18 of user core. Dec 16 12:29:22.343407 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 12:29:23.660790 sshd[5744]: Connection closed by 139.178.89.65 port 37000 Dec 16 12:29:23.661496 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:23.676170 systemd[1]: sshd@17-172.31.24.3:22-139.178.89.65:37000.service: Deactivated successfully. Dec 16 12:29:23.682789 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:29:23.691473 systemd-logind[1973]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:29:23.713507 systemd[1]: Started sshd@18-172.31.24.3:22-139.178.89.65:37008.service - OpenSSH per-connection server daemon (139.178.89.65:37008). Dec 16 12:29:23.718238 systemd-logind[1973]: Removed session 18. Dec 16 12:29:23.927152 sshd[5763]: Accepted publickey for core from 139.178.89.65 port 37008 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:23.929398 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:23.939132 systemd-logind[1973]: New session 19 of user core. Dec 16 12:29:23.949273 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:29:24.541497 sshd[5768]: Connection closed by 139.178.89.65 port 37008 Dec 16 12:29:24.543261 sshd-session[5763]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:24.556191 systemd[1]: sshd@18-172.31.24.3:22-139.178.89.65:37008.service: Deactivated successfully. Dec 16 12:29:24.564845 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:29:24.570222 systemd-logind[1973]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:29:24.593813 systemd[1]: Started sshd@19-172.31.24.3:22-139.178.89.65:37024.service - OpenSSH per-connection server daemon (139.178.89.65:37024). Dec 16 12:29:24.598081 systemd-logind[1973]: Removed session 19. 
Dec 16 12:29:24.800115 sshd[5780]: Accepted publickey for core from 139.178.89.65 port 37024 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:24.803072 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:24.812106 systemd-logind[1973]: New session 20 of user core. Dec 16 12:29:24.820263 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:29:24.845085 containerd[2004]: time="2025-12-16T12:29:24.844460895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:29:25.090621 sshd[5785]: Connection closed by 139.178.89.65 port 37024 Dec 16 12:29:25.091316 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:25.098479 systemd[1]: sshd@19-172.31.24.3:22-139.178.89.65:37024.service: Deactivated successfully. Dec 16 12:29:25.103671 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:29:25.108417 systemd-logind[1973]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:29:25.112270 systemd-logind[1973]: Removed session 20. 
Dec 16 12:29:25.113475 containerd[2004]: time="2025-12-16T12:29:25.113417028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:25.115922 containerd[2004]: time="2025-12-16T12:29:25.115771104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:29:25.116695 containerd[2004]: time="2025-12-16T12:29:25.115935936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:29:25.117647 kubelet[3334]: E1216 12:29:25.117480 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:25.117647 kubelet[3334]: E1216 12:29:25.117552 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:29:25.120755 kubelet[3334]: E1216 12:29:25.117692 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" logger="UnhandledError" Dec 16 12:29:25.121007 containerd[2004]: time="2025-12-16T12:29:25.120663240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:29:25.363741 containerd[2004]: time="2025-12-16T12:29:25.363369146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:25.366378 containerd[2004]: time="2025-12-16T12:29:25.366216590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:29:25.366378 containerd[2004]: time="2025-12-16T12:29:25.366286202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:25.366863 kubelet[3334]: E1216 12:29:25.366788 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:25.368057 kubelet[3334]: E1216 12:29:25.367045 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:29:25.368057 kubelet[3334]: E1216 12:29:25.367278 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:25.368057 kubelet[3334]: E1216 12:29:25.367352 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:29:28.842343 containerd[2004]: time="2025-12-16T12:29:28.842141863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:29:29.152334 containerd[2004]: time="2025-12-16T12:29:29.152259628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:29.154673 containerd[2004]: time="2025-12-16T12:29:29.154600852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:29:29.154801 
containerd[2004]: time="2025-12-16T12:29:29.154733608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:29:29.155135 kubelet[3334]: E1216 12:29:29.155075 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:29.156050 kubelet[3334]: E1216 12:29:29.155697 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:29:29.156050 kubelet[3334]: E1216 12:29:29.155868 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-57cf7db4b7-6r27k_calico-system(c54f17de-062d-4ea0-b0e3-144077363c3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:29.156050 kubelet[3334]: E1216 12:29:29.155928 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:29:30.130460 systemd[1]: Started sshd@20-172.31.24.3:22-139.178.89.65:37026.service - OpenSSH per-connection server daemon (139.178.89.65:37026). Dec 16 12:29:30.339929 sshd[5803]: Accepted publickey for core from 139.178.89.65 port 37026 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:30.342691 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:30.353030 systemd-logind[1973]: New session 21 of user core. Dec 16 12:29:30.361384 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:29:30.638401 sshd[5808]: Connection closed by 139.178.89.65 port 37026 Dec 16 12:29:30.639493 sshd-session[5803]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:30.646781 systemd-logind[1973]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:29:30.648102 systemd[1]: sshd@20-172.31.24.3:22-139.178.89.65:37026.service: Deactivated successfully. Dec 16 12:29:30.653071 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:29:30.659939 systemd-logind[1973]: Removed session 21. 
Dec 16 12:29:30.845649 containerd[2004]: time="2025-12-16T12:29:30.845309829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:29:31.208613 containerd[2004]: time="2025-12-16T12:29:31.208516387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:31.210995 containerd[2004]: time="2025-12-16T12:29:31.210839863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:29:31.211210 containerd[2004]: time="2025-12-16T12:29:31.211176259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:31.212084 kubelet[3334]: E1216 12:29:31.211435 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:31.212084 kubelet[3334]: E1216 12:29:31.211499 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:29:31.212084 kubelet[3334]: E1216 12:29:31.211611 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m7qfq_calico-system(59a087e1-e448-441a-b97a-fe80bf31dd45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:31.212084 kubelet[3334]: E1216 12:29:31.211664 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:29:31.843861 containerd[2004]: time="2025-12-16T12:29:31.842884102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:32.144688 containerd[2004]: time="2025-12-16T12:29:32.144551791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:32.147484 containerd[2004]: time="2025-12-16T12:29:32.147328711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:32.147484 containerd[2004]: time="2025-12-16T12:29:32.147399091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:32.147786 kubelet[3334]: E1216 12:29:32.147686 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:32.147874 kubelet[3334]: E1216 12:29:32.147783 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:32.148353 kubelet[3334]: E1216 12:29:32.148310 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c59c4c686-nm8c9_calico-apiserver(b28336ef-bcfa-4481-a4ab-447af79aaaba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:32.148470 kubelet[3334]: E1216 12:29:32.148396 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:29:32.149622 containerd[2004]: time="2025-12-16T12:29:32.149554483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:29:32.445304 containerd[2004]: time="2025-12-16T12:29:32.445195449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:32.447567 containerd[2004]: time="2025-12-16T12:29:32.447501609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:29:32.447681 containerd[2004]: time="2025-12-16T12:29:32.447629193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:29:32.447873 kubelet[3334]: E1216 12:29:32.447831 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:32.449088 kubelet[3334]: E1216 12:29:32.447889 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:29:32.449088 kubelet[3334]: E1216 12:29:32.448101 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:32.450186 containerd[2004]: time="2025-12-16T12:29:32.450122073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:29:32.718752 containerd[2004]: time="2025-12-16T12:29:32.718607278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:32.721778 
containerd[2004]: time="2025-12-16T12:29:32.721624342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:29:32.721891 containerd[2004]: time="2025-12-16T12:29:32.721654258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:29:32.722321 kubelet[3334]: E1216 12:29:32.722264 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:32.722527 kubelet[3334]: E1216 12:29:32.722488 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:29:32.722769 kubelet[3334]: E1216 12:29:32.722729 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:32.724631 kubelet[3334]: E1216 12:29:32.724502 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:29:34.842185 containerd[2004]: time="2025-12-16T12:29:34.841756381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:29:35.111693 containerd[2004]: time="2025-12-16T12:29:35.111496666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:29:35.113904 containerd[2004]: time="2025-12-16T12:29:35.113797546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:29:35.114176 containerd[2004]: time="2025-12-16T12:29:35.113815210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:29:35.114250 kubelet[3334]: E1216 12:29:35.114152 3334 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:35.114250 kubelet[3334]: E1216 12:29:35.114227 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:29:35.114781 kubelet[3334]: E1216 12:29:35.114344 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c59c4c686-xwscr_calico-apiserver(4b0a11dd-5ec4-458d-86dc-437a0146fd85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:29:35.114781 kubelet[3334]: E1216 12:29:35.114396 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:29:35.675540 systemd[1]: Started sshd@21-172.31.24.3:22-139.178.89.65:48438.service - OpenSSH per-connection server daemon (139.178.89.65:48438). 
Dec 16 12:29:35.875725 sshd[5821]: Accepted publickey for core from 139.178.89.65 port 48438 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:35.878282 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:35.888094 systemd-logind[1973]: New session 22 of user core. Dec 16 12:29:35.892228 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 16 12:29:36.185640 sshd[5824]: Connection closed by 139.178.89.65 port 48438 Dec 16 12:29:36.186394 sshd-session[5821]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:36.194064 systemd-logind[1973]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:29:36.194926 systemd[1]: sshd@21-172.31.24.3:22-139.178.89.65:48438.service: Deactivated successfully. Dec 16 12:29:36.201628 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:29:36.206655 systemd-logind[1973]: Removed session 22. Dec 16 12:29:36.842299 kubelet[3334]: E1216 12:29:36.842124 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" 
podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:29:40.839821 kubelet[3334]: E1216 12:29:40.839661 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:29:41.237500 systemd[1]: Started sshd@22-172.31.24.3:22-139.178.89.65:39568.service - OpenSSH per-connection server daemon (139.178.89.65:39568). Dec 16 12:29:41.460888 sshd[5836]: Accepted publickey for core from 139.178.89.65 port 39568 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:41.464380 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:41.476277 systemd-logind[1973]: New session 23 of user core. Dec 16 12:29:41.482605 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:29:41.818435 sshd[5839]: Connection closed by 139.178.89.65 port 39568 Dec 16 12:29:41.823592 sshd-session[5836]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:41.835604 systemd-logind[1973]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:29:41.836859 systemd[1]: sshd@22-172.31.24.3:22-139.178.89.65:39568.service: Deactivated successfully. Dec 16 12:29:41.845240 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:29:41.854593 systemd-logind[1973]: Removed session 23. 
Dec 16 12:29:42.844032 kubelet[3334]: E1216 12:29:42.842824 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:29:44.842996 kubelet[3334]: E1216 12:29:44.842697 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:29:44.854129 kubelet[3334]: E1216 12:29:44.853935 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:29:46.851563 kubelet[3334]: E1216 12:29:46.849052 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:29:46.868716 systemd[1]: Started sshd@23-172.31.24.3:22-139.178.89.65:39580.service - OpenSSH per-connection server daemon (139.178.89.65:39580). Dec 16 12:29:47.101173 sshd[5873]: Accepted publickey for core from 139.178.89.65 port 39580 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:47.104654 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:47.115072 systemd-logind[1973]: New session 24 of user core. Dec 16 12:29:47.125323 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 16 12:29:47.398876 sshd[5876]: Connection closed by 139.178.89.65 port 39580 Dec 16 12:29:47.398655 sshd-session[5873]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:47.410185 systemd[1]: sshd@23-172.31.24.3:22-139.178.89.65:39580.service: Deactivated successfully. Dec 16 12:29:47.418527 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:29:47.423123 systemd-logind[1973]: Session 24 logged out. 
Waiting for processes to exit. Dec 16 12:29:47.427046 systemd-logind[1973]: Removed session 24. Dec 16 12:29:49.845320 kubelet[3334]: E1216 12:29:49.845209 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:29:52.441639 systemd[1]: Started sshd@24-172.31.24.3:22-139.178.89.65:53744.service - OpenSSH per-connection server daemon (139.178.89.65:53744). Dec 16 12:29:52.670522 sshd[5892]: Accepted publickey for core from 139.178.89.65 port 53744 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:52.673224 sshd-session[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:52.689298 systemd-logind[1973]: New session 25 of user core. Dec 16 12:29:52.694296 systemd[1]: Started session-25.scope - Session 25 of User core. 
Dec 16 12:29:53.005025 sshd[5895]: Connection closed by 139.178.89.65 port 53744 Dec 16 12:29:53.005442 sshd-session[5892]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:53.015354 systemd[1]: sshd@24-172.31.24.3:22-139.178.89.65:53744.service: Deactivated successfully. Dec 16 12:29:53.024814 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:29:53.030853 systemd-logind[1973]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:29:53.033599 systemd-logind[1973]: Removed session 25. Dec 16 12:29:53.838398 kubelet[3334]: E1216 12:29:53.838340 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:29:56.840993 kubelet[3334]: E1216 12:29:56.840546 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:29:57.841255 kubelet[3334]: E1216 12:29:57.841161 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:29:58.044807 systemd[1]: Started sshd@25-172.31.24.3:22-139.178.89.65:53752.service - OpenSSH per-connection server daemon (139.178.89.65:53752). Dec 16 12:29:58.261465 sshd[5910]: Accepted publickey for core from 139.178.89.65 port 53752 ssh2: RSA SHA256:xUh8ykt9z5cbsZrCvFOBrVTYezXLnfIWheoQb+9aZNE Dec 16 12:29:58.264598 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:58.276461 systemd-logind[1973]: New session 26 of user core. Dec 16 12:29:58.284355 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:29:58.650619 sshd[5913]: Connection closed by 139.178.89.65 port 53752 Dec 16 12:29:58.661259 sshd-session[5910]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:58.669682 systemd[1]: sshd@25-172.31.24.3:22-139.178.89.65:53752.service: Deactivated successfully. Dec 16 12:29:58.676206 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 12:29:58.681520 systemd-logind[1973]: Session 26 logged out. Waiting for processes to exit. 
Dec 16 12:29:58.687572 systemd-logind[1973]: Removed session 26. Dec 16 12:29:58.843779 kubelet[3334]: E1216 12:29:58.842850 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:29:59.839472 kubelet[3334]: E1216 12:29:59.839323 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:30:02.845317 kubelet[3334]: E1216 12:30:02.845188 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:30:08.845068 kubelet[3334]: E1216 12:30:08.844311 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-57cf7db4b7-6r27k" podUID="c54f17de-062d-4ea0-b0e3-144077363c3e" Dec 16 12:30:10.838541 kubelet[3334]: E1216 12:30:10.838455 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-xwscr" podUID="4b0a11dd-5ec4-458d-86dc-437a0146fd85" Dec 16 12:30:11.838502 kubelet[3334]: E1216 12:30:11.838398 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba" Dec 16 12:30:11.964400 kubelet[3334]: E1216 12:30:11.964309 3334 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-3?timeout=10s\": context deadline exceeded" Dec 16 12:30:12.448452 systemd[1]: cri-containerd-9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8.scope: Deactivated successfully. Dec 16 12:30:12.449572 systemd[1]: cri-containerd-9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8.scope: Consumed 31.120s CPU time, 101.4M memory peak. Dec 16 12:30:12.455752 containerd[2004]: time="2025-12-16T12:30:12.455617775Z" level=info msg="received container exit event container_id:\"9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8\" id:\"9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8\" pid:3934 exit_status:1 exited_at:{seconds:1765888212 nanos:455020415}" Dec 16 12:30:12.497300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8-rootfs.mount: Deactivated successfully. 
Dec 16 12:30:12.788556 kubelet[3334]: I1216 12:30:12.788429 3334 scope.go:117] "RemoveContainer" containerID="9866f80caf7d44495848a5d5c61dd7fe8a29c8b20f80f285d0655e4689a9b8a8" Dec 16 12:30:12.792086 containerd[2004]: time="2025-12-16T12:30:12.791860969Z" level=info msg="CreateContainer within sandbox \"6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 16 12:30:12.812761 containerd[2004]: time="2025-12-16T12:30:12.811249189Z" level=info msg="Container 64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:12.828034 containerd[2004]: time="2025-12-16T12:30:12.827943361Z" level=info msg="CreateContainer within sandbox \"6bf9b126a452fe2f813681f29787dedb70b666573148ec3be0da24474d515792\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f\"" Dec 16 12:30:12.830003 containerd[2004]: time="2025-12-16T12:30:12.829180777Z" level=info msg="StartContainer for \"64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f\"" Dec 16 12:30:12.831208 containerd[2004]: time="2025-12-16T12:30:12.831150577Z" level=info msg="connecting to shim 64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f" address="unix:///run/containerd/s/57fd99bc8ae684a7d7d6d7a7d86b416e555bb182349f4f70fb12fa35e774f4cf" protocol=ttrpc version=3 Dec 16 12:30:12.854098 containerd[2004]: time="2025-12-16T12:30:12.854052553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:30:12.883301 systemd[1]: Started cri-containerd-64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f.scope - libcontainer container 64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f. 
Dec 16 12:30:12.947999 containerd[2004]: time="2025-12-16T12:30:12.947897978Z" level=info msg="StartContainer for \"64d3383236c3daf6e7b0f815e5b48efbae1d92d0f128f87a208239ee50ce4b3f\" returns successfully" Dec 16 12:30:13.121561 containerd[2004]: time="2025-12-16T12:30:13.121369079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:13.123687 containerd[2004]: time="2025-12-16T12:30:13.123617939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:30:13.123830 containerd[2004]: time="2025-12-16T12:30:13.123755627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:30:13.124173 kubelet[3334]: E1216 12:30:13.124030 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:13.124173 kubelet[3334]: E1216 12:30:13.124093 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:13.125259 kubelet[3334]: E1216 12:30:13.124890 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:13.126451 containerd[2004]: time="2025-12-16T12:30:13.126155243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:30:13.435241 containerd[2004]: time="2025-12-16T12:30:13.435157284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:13.437365 containerd[2004]: time="2025-12-16T12:30:13.437299368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:30:13.437479 containerd[2004]: time="2025-12-16T12:30:13.437411088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:30:13.437688 kubelet[3334]: E1216 12:30:13.437633 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:13.437825 kubelet[3334]: E1216 12:30:13.437700 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:13.437911 kubelet[3334]: E1216 12:30:13.437811 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-qps4l_calico-system(0821a17d-3c03-4228-a061-1c97b86f544e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:13.437911 kubelet[3334]: E1216 12:30:13.437884 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qps4l" podUID="0821a17d-3c03-4228-a061-1c97b86f544e" Dec 16 12:30:13.501594 systemd[1]: cri-containerd-025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18.scope: Deactivated successfully. Dec 16 12:30:13.502533 systemd[1]: cri-containerd-025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18.scope: Consumed 6.441s CPU time, 62.1M memory peak. 
Dec 16 12:30:13.507656 containerd[2004]: time="2025-12-16T12:30:13.507291037Z" level=info msg="received container exit event container_id:\"025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18\" id:\"025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18\" pid:3150 exit_status:1 exited_at:{seconds:1765888213 nanos:506844637}" Dec 16 12:30:13.555869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18-rootfs.mount: Deactivated successfully. Dec 16 12:30:13.803800 kubelet[3334]: I1216 12:30:13.802314 3334 scope.go:117] "RemoveContainer" containerID="025533ff14b9a7c4c8fd17fdda939f91cf40e0d01d64787f200c85d16d545f18" Dec 16 12:30:13.807755 containerd[2004]: time="2025-12-16T12:30:13.807637046Z" level=info msg="CreateContainer within sandbox \"9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 16 12:30:13.829024 containerd[2004]: time="2025-12-16T12:30:13.827795606Z" level=info msg="Container 1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:13.842666 containerd[2004]: time="2025-12-16T12:30:13.842594342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:30:13.851706 containerd[2004]: time="2025-12-16T12:30:13.851642870Z" level=info msg="CreateContainer within sandbox \"9d5d0d4348b99487e193955e28d8508d9a3d921b5f4427aeac1d4b24a497e841\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b\"" Dec 16 12:30:13.852750 containerd[2004]: time="2025-12-16T12:30:13.852678914Z" level=info msg="StartContainer for \"1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b\"" Dec 16 12:30:13.855833 containerd[2004]: time="2025-12-16T12:30:13.855768902Z" level=info msg="connecting to 
shim 1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b" address="unix:///run/containerd/s/44447338c27c1e5fcab5c7b15ecfeca85c7baa9f96ead163346256089a8a7cb4" protocol=ttrpc version=3 Dec 16 12:30:13.909300 systemd[1]: Started cri-containerd-1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b.scope - libcontainer container 1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b. Dec 16 12:30:14.007069 containerd[2004]: time="2025-12-16T12:30:14.007002407Z" level=info msg="StartContainer for \"1f8ee343cb8bcdc4719e64e6adb42ff5637758accb58ade2f5beae2d69b0c10b\" returns successfully" Dec 16 12:30:14.114363 containerd[2004]: time="2025-12-16T12:30:14.114175788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:14.116517 containerd[2004]: time="2025-12-16T12:30:14.116412504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:30:14.117099 containerd[2004]: time="2025-12-16T12:30:14.116429016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:14.117167 kubelet[3334]: E1216 12:30:14.116758 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:14.117167 kubelet[3334]: E1216 12:30:14.116818 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:14.117167 kubelet[3334]: E1216 12:30:14.116938 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-m7qfq_calico-system(59a087e1-e448-441a-b97a-fe80bf31dd45): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:14.117532 kubelet[3334]: E1216 12:30:14.117434 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-m7qfq" podUID="59a087e1-e448-441a-b97a-fe80bf31dd45" Dec 16 12:30:17.839335 containerd[2004]: time="2025-12-16T12:30:17.839001834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:30:18.147644 containerd[2004]: time="2025-12-16T12:30:18.147211216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:18.149345 containerd[2004]: time="2025-12-16T12:30:18.149275888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:30:18.149461 containerd[2004]: time="2025-12-16T12:30:18.149395948Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:30:18.149662 kubelet[3334]: E1216 12:30:18.149602 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:18.150204 kubelet[3334]: E1216 12:30:18.149672 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:18.150204 kubelet[3334]: E1216 12:30:18.149860 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:18.151688 containerd[2004]: time="2025-12-16T12:30:18.151576336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:30:18.444991 containerd[2004]: time="2025-12-16T12:30:18.444908969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:18.447255 containerd[2004]: time="2025-12-16T12:30:18.447194633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:30:18.447378 containerd[2004]: time="2025-12-16T12:30:18.447317693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:18.447594 kubelet[3334]: E1216 12:30:18.447514 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:18.447594 kubelet[3334]: E1216 12:30:18.447584 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:18.447847 kubelet[3334]: E1216 12:30:18.447762 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7c5c54bb9-nqsf9_calico-system(582509e9-d0bd-4a8e-a8bd-67905f25b45b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:18.447983 kubelet[3334]: E1216 12:30:18.447907 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7c5c54bb9-nqsf9" podUID="582509e9-d0bd-4a8e-a8bd-67905f25b45b" Dec 16 12:30:18.460526 systemd[1]: cri-containerd-29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d.scope: Deactivated successfully. Dec 16 12:30:18.461174 systemd[1]: cri-containerd-29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d.scope: Consumed 7.599s CPU time, 20.4M memory peak. Dec 16 12:30:18.468744 containerd[2004]: time="2025-12-16T12:30:18.468693653Z" level=info msg="received container exit event container_id:\"29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d\" id:\"29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d\" pid:3174 exit_status:1 exited_at:{seconds:1765888218 nanos:468180173}" Dec 16 12:30:18.515941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d-rootfs.mount: Deactivated successfully. 
Dec 16 12:30:18.831903 kubelet[3334]: I1216 12:30:18.831768 3334 scope.go:117] "RemoveContainer" containerID="29f4cfd9003d8a8f9b76f9bdaa0938f2a2e6fa569a1a20b44dad42896ea1b92d" Dec 16 12:30:18.837328 containerd[2004]: time="2025-12-16T12:30:18.837281395Z" level=info msg="CreateContainer within sandbox \"0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 16 12:30:18.861531 containerd[2004]: time="2025-12-16T12:30:18.859395919Z" level=info msg="Container 341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:18.881798 containerd[2004]: time="2025-12-16T12:30:18.881719903Z" level=info msg="CreateContainer within sandbox \"0dc641b8321a13eae50ffb4b3934a9a6471baffdd0f4b3ce786975d211ed72e8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c\"" Dec 16 12:30:18.882601 containerd[2004]: time="2025-12-16T12:30:18.882510571Z" level=info msg="StartContainer for \"341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c\"" Dec 16 12:30:18.884865 containerd[2004]: time="2025-12-16T12:30:18.884796139Z" level=info msg="connecting to shim 341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c" address="unix:///run/containerd/s/ac2bc0be54df86dfe383c6c77f52f63d2f6a1168bd7b1e8d345ad2a2f8cae64c" protocol=ttrpc version=3 Dec 16 12:30:18.926265 systemd[1]: Started cri-containerd-341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c.scope - libcontainer container 341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c. 
Dec 16 12:30:19.018941 containerd[2004]: time="2025-12-16T12:30:19.018750016Z" level=info msg="StartContainer for \"341504b4065600e2987ace8d70e7580ea761c127e8eb26b98a7609e7b04dbc1c\" returns successfully" Dec 16 12:30:21.965596 kubelet[3334]: E1216 12:30:21.965118 3334 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-3?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 12:30:22.840131 containerd[2004]: time="2025-12-16T12:30:22.839632079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:23.146051 containerd[2004]: time="2025-12-16T12:30:23.145868205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:23.150289 containerd[2004]: time="2025-12-16T12:30:23.150168297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:23.150289 containerd[2004]: time="2025-12-16T12:30:23.150235605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:23.150668 kubelet[3334]: E1216 12:30:23.150574 3334 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:23.151297 kubelet[3334]: E1216 12:30:23.150668 3334 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:23.151297 kubelet[3334]: E1216 12:30:23.150823 3334 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c59c4c686-nm8c9_calico-apiserver(b28336ef-bcfa-4481-a4ab-447af79aaaba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:23.151297 kubelet[3334]: E1216 12:30:23.150878 3334 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c59c4c686-nm8c9" podUID="b28336ef-bcfa-4481-a4ab-447af79aaaba"