Nov 23 22:57:03.195804 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Nov 23 22:57:03.195856 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025 Nov 23 22:57:03.195880 kernel: KASLR disabled due to lack of seed Nov 23 22:57:03.195896 kernel: efi: EFI v2.7 by EDK II Nov 23 22:57:03.195912 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Nov 23 22:57:03.195927 kernel: secureboot: Secure boot disabled Nov 23 22:57:03.195945 kernel: ACPI: Early table checksum verification disabled Nov 23 22:57:03.195961 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Nov 23 22:57:03.195977 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Nov 23 22:57:03.195992 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 23 22:57:03.196008 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 23 22:57:03.196028 kernel: ACPI: FACS 0x0000000078630000 000040 Nov 23 22:57:03.196043 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 23 22:57:03.196059 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Nov 23 22:57:03.196077 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Nov 23 22:57:03.196219 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Nov 23 22:57:03.196246 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 23 22:57:03.196263 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Nov 23 22:57:03.196279 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Nov 23 22:57:03.196295 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Nov 23 22:57:03.196311 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Nov 23 22:57:03.196327 kernel: printk: legacy bootconsole [uart0] enabled Nov 23 22:57:03.196343 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 22:57:03.196361 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Nov 23 22:57:03.196377 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Nov 23 22:57:03.196394 kernel: Zone ranges: Nov 23 22:57:03.196410 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 23 22:57:03.196430 kernel: DMA32 empty Nov 23 22:57:03.196446 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Nov 23 22:57:03.196462 kernel: Device empty Nov 23 22:57:03.196477 kernel: Movable zone start for each node Nov 23 22:57:03.196493 kernel: Early memory node ranges Nov 23 22:57:03.196509 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Nov 23 22:57:03.196525 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Nov 23 22:57:03.196541 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Nov 23 22:57:03.196556 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Nov 23 22:57:03.196572 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Nov 23 22:57:03.196588 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Nov 23 22:57:03.196604 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Nov 23 22:57:03.196624 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Nov 23 22:57:03.196647 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Nov 23 22:57:03.196664 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Nov 23 22:57:03.196681 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Nov 23 22:57:03.196698 kernel: psci: probing for conduit method from ACPI. Nov 23 22:57:03.196719 kernel: psci: PSCIv1.0 detected in firmware. Nov 23 22:57:03.196735 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 22:57:03.196752 kernel: psci: Trusted OS migration not required Nov 23 22:57:03.196769 kernel: psci: SMC Calling Convention v1.1 Nov 23 22:57:03.196786 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Nov 23 22:57:03.196803 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 22:57:03.196820 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 22:57:03.196837 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 23 22:57:03.196854 kernel: Detected PIPT I-cache on CPU0 Nov 23 22:57:03.196871 kernel: CPU features: detected: GIC system register CPU interface Nov 23 22:57:03.196887 kernel: CPU features: detected: Spectre-v2 Nov 23 22:57:03.196908 kernel: CPU features: detected: Spectre-v3a Nov 23 22:57:03.196925 kernel: CPU features: detected: Spectre-BHB Nov 23 22:57:03.196942 kernel: CPU features: detected: ARM erratum 1742098 Nov 23 22:57:03.196959 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Nov 23 22:57:03.196975 kernel: alternatives: applying boot alternatives Nov 23 22:57:03.196994 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:57:03.197012 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 22:57:03.197029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 22:57:03.197046 kernel: Fallback order for Node 0: 0 Nov 23 22:57:03.197063 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Nov 23 22:57:03.197080 kernel: Policy zone: Normal Nov 23 22:57:03.197133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 22:57:03.197152 kernel: software IO TLB: area num 2. Nov 23 22:57:03.197169 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Nov 23 22:57:03.197185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 23 22:57:03.197204 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 22:57:03.197222 kernel: rcu: RCU event tracing is enabled. Nov 23 22:57:03.197240 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 23 22:57:03.197257 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 22:57:03.197274 kernel: Tracing variant of Tasks RCU enabled. Nov 23 22:57:03.197292 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 22:57:03.197309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 23 22:57:03.197331 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 23 22:57:03.197348 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 23 22:57:03.197365 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 22:57:03.197382 kernel: GICv3: 96 SPIs implemented Nov 23 22:57:03.197399 kernel: GICv3: 0 Extended SPIs implemented Nov 23 22:57:03.197415 kernel: Root IRQ handler: gic_handle_irq Nov 23 22:57:03.197432 kernel: GICv3: GICv3 features: 16 PPIs Nov 23 22:57:03.197449 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 22:57:03.197466 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Nov 23 22:57:03.197482 kernel: ITS [mem 0x10080000-0x1009ffff] Nov 23 22:57:03.197500 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Nov 23 22:57:03.197518 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Nov 23 22:57:03.197541 kernel: GICv3: using LPI property table @0x0000000400110000 Nov 23 22:57:03.197558 kernel: ITS: Using hypervisor restricted LPI range [128] Nov 23 22:57:03.197574 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Nov 23 22:57:03.197592 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 22:57:03.197609 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Nov 23 22:57:03.197626 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Nov 23 22:57:03.197643 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Nov 23 22:57:03.197660 kernel: Console: colour dummy device 80x25 Nov 23 22:57:03.197677 kernel: printk: legacy console [tty1] enabled Nov 23 22:57:03.197700 kernel: ACPI: Core revision 20240827 Nov 23 22:57:03.197719 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Nov 23 22:57:03.197741 kernel: pid_max: default: 32768 minimum: 301 Nov 23 22:57:03.197759 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 22:57:03.197776 kernel: landlock: Up and running. Nov 23 22:57:03.197793 kernel: SELinux: Initializing. Nov 23 22:57:03.197811 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:57:03.197828 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:57:03.197845 kernel: rcu: Hierarchical SRCU implementation. Nov 23 22:57:03.197862 kernel: rcu: Max phase no-delay instances is 400. Nov 23 22:57:03.197883 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 22:57:03.197901 kernel: Remapping and enabling EFI services. Nov 23 22:57:03.197917 kernel: smp: Bringing up secondary CPUs ... Nov 23 22:57:03.197934 kernel: Detected PIPT I-cache on CPU1 Nov 23 22:57:03.197951 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Nov 23 22:57:03.197968 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Nov 23 22:57:03.197985 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Nov 23 22:57:03.198002 kernel: smp: Brought up 1 node, 2 CPUs Nov 23 22:57:03.198019 kernel: SMP: Total of 2 processors activated. 
Nov 23 22:57:03.198040 kernel: CPU: All CPU(s) started at EL1 Nov 23 22:57:03.198118 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 22:57:03.198140 kernel: CPU features: detected: 32-bit EL1 Support Nov 23 22:57:03.198165 kernel: CPU features: detected: CRC32 instructions Nov 23 22:57:03.198183 kernel: alternatives: applying system-wide alternatives Nov 23 22:57:03.198203 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved) Nov 23 22:57:03.198221 kernel: devtmpfs: initialized Nov 23 22:57:03.198239 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 22:57:03.198262 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 23 22:57:03.198280 kernel: 16880 pages in range for non-PLT usage Nov 23 22:57:03.198298 kernel: 508400 pages in range for PLT usage Nov 23 22:57:03.198316 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 22:57:03.198334 kernel: SMBIOS 3.0.0 present. Nov 23 22:57:03.198352 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Nov 23 22:57:03.198370 kernel: DMI: Memory slots populated: 0/0 Nov 23 22:57:03.198388 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 22:57:03.198405 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 22:57:03.198427 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 22:57:03.198445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 22:57:03.198463 kernel: audit: initializing netlink subsys (disabled) Nov 23 22:57:03.198481 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1 Nov 23 22:57:03.198499 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 22:57:03.198517 kernel: cpuidle: using governor menu Nov 23 22:57:03.198535 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 23 22:57:03.198552 kernel: ASID allocator initialised with 65536 entries Nov 23 22:57:03.198570 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 22:57:03.198592 kernel: Serial: AMBA PL011 UART driver Nov 23 22:57:03.198610 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 22:57:03.198628 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 22:57:03.198646 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 22:57:03.198664 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 22:57:03.198682 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 22:57:03.198700 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 22:57:03.198718 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 22:57:03.198736 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 22:57:03.198758 kernel: ACPI: Added _OSI(Module Device) Nov 23 22:57:03.198776 kernel: ACPI: Added _OSI(Processor Device) Nov 23 22:57:03.198794 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 22:57:03.198812 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 22:57:03.198830 kernel: ACPI: Interpreter enabled Nov 23 22:57:03.198847 kernel: ACPI: Using GIC for interrupt routing Nov 23 22:57:03.198865 kernel: ACPI: MCFG table detected, 1 entries Nov 23 22:57:03.198883 kernel: ACPI: CPU0 has been hot-added Nov 23 22:57:03.198900 kernel: ACPI: CPU1 has been hot-added Nov 23 22:57:03.198923 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Nov 23 22:57:03.199297 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 22:57:03.199506 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 22:57:03.199749 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 22:57:03.199964 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Nov 23 22:57:03.200236 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Nov 23 22:57:03.200270 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Nov 23 22:57:03.200299 kernel: acpiphp: Slot [1] registered Nov 23 22:57:03.200319 kernel: acpiphp: Slot [2] registered Nov 23 22:57:03.200338 kernel: acpiphp: Slot [3] registered Nov 23 22:57:03.200356 kernel: acpiphp: Slot [4] registered Nov 23 22:57:03.200375 kernel: acpiphp: Slot [5] registered Nov 23 22:57:03.200392 kernel: acpiphp: Slot [6] registered Nov 23 22:57:03.200410 kernel: acpiphp: Slot [7] registered Nov 23 22:57:03.200428 kernel: acpiphp: Slot [8] registered Nov 23 22:57:03.200445 kernel: acpiphp: Slot [9] registered Nov 23 22:57:03.200463 kernel: acpiphp: Slot [10] registered Nov 23 22:57:03.200605 kernel: acpiphp: Slot [11] registered Nov 23 22:57:03.202926 kernel: acpiphp: Slot [12] registered Nov 23 22:57:03.202947 kernel: acpiphp: Slot [13] registered Nov 23 22:57:03.202967 kernel: acpiphp: Slot [14] registered Nov 23 22:57:03.202985 kernel: acpiphp: Slot [15] registered Nov 23 22:57:03.203003 kernel: acpiphp: Slot [16] registered Nov 23 22:57:03.203021 kernel: acpiphp: Slot [17] registered Nov 23 22:57:03.203039 kernel: acpiphp: Slot [18] registered Nov 23 22:57:03.203056 kernel: acpiphp: Slot [19] registered Nov 23 22:57:03.203085 kernel: acpiphp: Slot [20] registered Nov 23 22:57:03.203130 kernel: acpiphp: Slot [21] registered Nov 23 22:57:03.203148 
kernel: acpiphp: Slot [22] registered Nov 23 22:57:03.203166 kernel: acpiphp: Slot [23] registered Nov 23 22:57:03.203184 kernel: acpiphp: Slot [24] registered Nov 23 22:57:03.203205 kernel: acpiphp: Slot [25] registered Nov 23 22:57:03.203223 kernel: acpiphp: Slot [26] registered Nov 23 22:57:03.203241 kernel: acpiphp: Slot [27] registered Nov 23 22:57:03.203260 kernel: acpiphp: Slot [28] registered Nov 23 22:57:03.203278 kernel: acpiphp: Slot [29] registered Nov 23 22:57:03.203303 kernel: acpiphp: Slot [30] registered Nov 23 22:57:03.203321 kernel: acpiphp: Slot [31] registered Nov 23 22:57:03.203340 kernel: PCI host bridge to bus 0000:00 Nov 23 22:57:03.203691 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Nov 23 22:57:03.207755 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 22:57:03.207992 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Nov 23 22:57:03.208241 kernel: pci_bus 0000:00: root bus resource [bus 00] Nov 23 22:57:03.208506 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Nov 23 22:57:03.208734 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Nov 23 22:57:03.208934 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Nov 23 22:57:03.216042 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Nov 23 22:57:03.218284 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Nov 23 22:57:03.218492 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 23 22:57:03.218713 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Nov 23 22:57:03.218905 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Nov 23 22:57:03.219133 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Nov 23 22:57:03.219339 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Nov 23 22:57:03.219529 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 23 22:57:03.219706 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Nov 23 22:57:03.219876 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 23 22:57:03.220052 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Nov 23 22:57:03.220077 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 22:57:03.220119 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 22:57:03.220140 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 22:57:03.220159 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 22:57:03.220178 kernel: iommu: Default domain type: Translated Nov 23 22:57:03.220196 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 22:57:03.220214 kernel: efivars: Registered efivars operations Nov 23 22:57:03.220232 kernel: vgaarb: loaded Nov 23 22:57:03.220256 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 22:57:03.220274 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 22:57:03.220292 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 22:57:03.220310 kernel: pnp: PnP ACPI init Nov 23 22:57:03.220524 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Nov 23 22:57:03.220551 kernel: pnp: PnP ACPI: found 1 devices Nov 23 22:57:03.220569 kernel: NET: Registered PF_INET protocol family Nov 23 22:57:03.220587 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 22:57:03.220610 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 22:57:03.220629 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 22:57:03.220647 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 22:57:03.220664 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 22:57:03.220682 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 22:57:03.220700 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:57:03.220718 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:57:03.220736 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 22:57:03.220753 kernel: PCI: CLS 0 bytes, default 64 Nov 23 22:57:03.220775 kernel: kvm [1]: HYP mode not available Nov 23 22:57:03.220793 kernel: Initialise system trusted keyrings Nov 23 22:57:03.220810 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 22:57:03.220828 kernel: Key type asymmetric registered Nov 23 22:57:03.220846 kernel: Asymmetric key parser 'x509' registered Nov 23 22:57:03.220864 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 22:57:03.220882 kernel: io scheduler mq-deadline registered Nov 23 22:57:03.220900 kernel: io scheduler kyber registered Nov 23 22:57:03.220917 kernel: io scheduler bfq registered Nov 23 22:57:03.221142 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Nov 23 22:57:03.221171 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 22:57:03.221189 kernel: ACPI: button: Power Button [PWRB] Nov 23 22:57:03.221207 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Nov 23 22:57:03.221225 kernel: ACPI: button: Sleep Button [SLPB] Nov 23 22:57:03.221243 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 22:57:03.221262 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 22:57:03.221458 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Nov 23 22:57:03.221489 kernel: printk: legacy console [ttyS0] disabled Nov 23 22:57:03.221508 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Nov 23 22:57:03.221526 kernel: printk: legacy console [ttyS0] enabled Nov 23 22:57:03.221543 kernel: printk: legacy bootconsole [uart0] disabled Nov 23 22:57:03.221561 kernel: thunder_xcv, ver 1.0 Nov 23 22:57:03.221578 kernel: thunder_bgx, ver 1.0 Nov 23 22:57:03.221596 kernel: nicpf, ver 1.0 Nov 23 22:57:03.221613 kernel: nicvf, ver 1.0 Nov 23 22:57:03.221816 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 22:57:03.221997 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T22:57:02 UTC (1763938622) Nov 23 22:57:03.222022 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 22:57:03.222040 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Nov 23 22:57:03.222123 kernel: watchdog: NMI not fully supported Nov 23 22:57:03.222168 kernel: NET: Registered PF_INET6 protocol family Nov 23 22:57:03.222192 kernel: watchdog: Hard watchdog permanently disabled Nov 23 22:57:03.222211 kernel: Segment Routing with IPv6 Nov 23 22:57:03.222233 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 22:57:03.222251 kernel: NET: Registered PF_PACKET protocol family Nov 23 22:57:03.222277 kernel: Key type 
dns_resolver registered Nov 23 22:57:03.222294 kernel: registered taskstats version 1 Nov 23 22:57:03.222312 kernel: Loading compiled-in X.509 certificates Nov 23 22:57:03.222331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1' Nov 23 22:57:03.222350 kernel: Demotion targets for Node 0: null Nov 23 22:57:03.222367 kernel: Key type .fscrypt registered Nov 23 22:57:03.222386 kernel: Key type fscrypt-provisioning registered Nov 23 22:57:03.222403 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 22:57:03.222421 kernel: ima: Allocated hash algorithm: sha1 Nov 23 22:57:03.222444 kernel: ima: No architecture policies found Nov 23 22:57:03.222462 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 22:57:03.222480 kernel: clk: Disabling unused clocks Nov 23 22:57:03.222498 kernel: PM: genpd: Disabling unused power domains Nov 23 22:57:03.222516 kernel: Warning: unable to open an initial console. Nov 23 22:57:03.222534 kernel: Freeing unused kernel memory: 39552K Nov 23 22:57:03.222552 kernel: Run /init as init process Nov 23 22:57:03.222569 kernel: with arguments: Nov 23 22:57:03.222587 kernel: /init Nov 23 22:57:03.222608 kernel: with environment: Nov 23 22:57:03.222625 kernel: HOME=/ Nov 23 22:57:03.222643 kernel: TERM=linux Nov 23 22:57:03.222663 systemd[1]: Successfully made /usr/ read-only. Nov 23 22:57:03.222687 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:57:03.222708 systemd[1]: Detected virtualization amazon. Nov 23 22:57:03.222727 systemd[1]: Detected architecture arm64. Nov 23 22:57:03.222750 systemd[1]: Running in initrd. Nov 23 22:57:03.222769 systemd[1]: No hostname configured, using default hostname. Nov 23 22:57:03.222789 systemd[1]: Hostname set to . Nov 23 22:57:03.222807 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:57:03.222826 systemd[1]: Queued start job for default target initrd.target. Nov 23 22:57:03.222845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:57:03.222864 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:57:03.222884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 22:57:03.222907 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:57:03.222926 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 22:57:03.222947 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 22:57:03.222968 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 22:57:03.222987 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 22:57:03.223006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:57:03.223025 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Nov 23 22:57:03.223047 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:57:03.223066 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:57:03.223085 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:57:03.223135 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:57:03.223155 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:57:03.223175 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:57:03.223194 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 22:57:03.223213 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 22:57:03.223233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:57:03.223259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:57:03.223278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:57:03.223297 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:57:03.223316 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 22:57:03.223335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:57:03.223354 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 22:57:03.223374 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 22:57:03.223393 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 22:57:03.223417 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:57:03.223436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:57:03.223455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:57:03.223473 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 22:57:03.223494 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:57:03.223517 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 22:57:03.223537 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:57:03.223597 systemd-journald[259]: Collecting audit messages is disabled. Nov 23 22:57:03.223639 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 22:57:03.223663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:03.223682 kernel: Bridge firewalling registered Nov 23 22:57:03.223700 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:57:03.223720 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:57:03.223739 systemd-journald[259]: Journal started Nov 23 22:57:03.223775 systemd-journald[259]: Runtime Journal (/run/log/journal/ec2ff39cf7b9c4ec59b0807414df6d96) is 8M, max 75.3M, 67.3M free. Nov 23 22:57:03.174767 systemd-modules-load[260]: Inserted module 'overlay' Nov 23 22:57:03.210278 systemd-modules-load[260]: Inserted module 'br_netfilter' Nov 23 22:57:03.237798 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:57:03.243542 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 23 22:57:03.250612 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:57:03.254845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:57:03.267780 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:57:03.309728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:57:03.313617 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 22:57:03.326180 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:57:03.336470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:57:03.341058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:57:03.347435 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 22:57:03.365322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:57:03.401625 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:57:03.469416 systemd-resolved[300]: Positive Trust Anchors: Nov 23 22:57:03.469453 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:57:03.469514 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:57:03.585132 kernel: SCSI subsystem initialized Nov 23 22:57:03.595122 kernel: Loading iSCSI transport class v2.0-870. Nov 23 22:57:03.606143 kernel: iscsi: registered transport (tcp) Nov 23 22:57:03.628809 kernel: iscsi: registered transport (qla4xxx) Nov 23 22:57:03.628900 kernel: QLogic iSCSI HBA Driver Nov 23 22:57:03.665277 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:57:03.702430 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:57:03.710840 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:57:03.744133 kernel: random: crng init done Nov 23 22:57:03.744762 systemd-resolved[300]: Defaulting to hostname 'linux'. Nov 23 22:57:03.749338 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:57:03.754694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:57:03.831223 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 23 22:57:03.838114 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 23 22:57:03.933172 kernel: raid6: neonx8 gen() 6383 MB/s Nov 23 22:57:03.950145 kernel: raid6: neonx4 gen() 6355 MB/s Nov 23 22:57:03.967134 kernel: raid6: neonx2 gen() 5324 MB/s Nov 23 22:57:03.984138 kernel: raid6: neonx1 gen() 3914 MB/s Nov 23 22:57:04.001135 kernel: raid6: int64x8 gen() 3626 MB/s Nov 23 22:57:04.018137 kernel: raid6: int64x4 gen() 3676 MB/s Nov 23 22:57:04.035156 kernel: raid6: int64x2 gen() 3530 MB/s Nov 23 22:57:04.053309 kernel: raid6: int64x1 gen() 2733 MB/s Nov 23 22:57:04.053382 kernel: raid6: using algorithm neonx8 gen() 6383 MB/s Nov 23 22:57:04.072466 kernel: raid6: .... xor() 4703 MB/s, rmw enabled Nov 23 22:57:04.072546 kernel: raid6: using neon recovery algorithm Nov 23 22:57:04.081156 kernel: xor: measuring software checksum speed Nov 23 22:57:04.081233 kernel: 8regs : 11498 MB/sec Nov 23 22:57:04.083383 kernel: 32regs : 12957 MB/sec Nov 23 22:57:04.084772 kernel: arm64_neon : 9160 MB/sec Nov 23 22:57:04.084837 kernel: xor: using function: 32regs (12957 MB/sec) Nov 23 22:57:04.184153 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 22:57:04.197717 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:57:04.205627 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:57:04.266231 systemd-udevd[508]: Using default interface naming scheme 'v255'. Nov 23 22:57:04.278910 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:57:04.291337 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 22:57:04.336892 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation Nov 23 22:57:04.390227 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:57:04.397846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:57:04.536178 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:57:04.553129 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 22:57:04.736343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:57:04.739020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:04.747588 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 22:57:04.747670 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 22:57:04.747698 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 23 22:57:04.749915 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 23 22:57:04.748793 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:57:04.765149 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 23 22:57:04.765550 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 23 22:57:04.755437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:57:04.764881 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:57:04.778123 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:4b:d1:e9:c2:45 Nov 23 22:57:04.784182 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 23 22:57:04.794883 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 23 22:57:04.795012 kernel: GPT:9289727 != 33554431 Nov 23 22:57:04.795046 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 22:57:04.795070 kernel: GPT:9289727 != 33554431 Nov 23 22:57:04.795123 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 22:57:04.795152 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:57:04.803977 (udev-worker)[576]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:57:04.831216 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:04.852150 kernel: nvme nvme0: using unchecked data buffer Nov 23 22:57:04.993774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 23 22:57:05.031622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 23 22:57:05.034687 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 23 22:57:05.082226 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 22:57:05.112661 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 23 22:57:05.139607 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 22:57:05.145790 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:57:05.152018 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:57:05.155647 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:57:05.164781 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 22:57:05.171287 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 22:57:05.215771 disk-uuid[686]: Primary Header is updated. Nov 23 22:57:05.215771 disk-uuid[686]: Secondary Entries is updated. Nov 23 22:57:05.215771 disk-uuid[686]: Secondary Header is updated. Nov 23 22:57:05.222421 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:57:05.217354 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:57:06.259135 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 22:57:06.261821 disk-uuid[693]: The operation has completed successfully. Nov 23 22:57:06.483456 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 22:57:06.484227 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 22:57:06.572787 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 22:57:06.606048 sh[953]: Success Nov 23 22:57:06.636334 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 22:57:06.636458 kernel: device-mapper: uevent: version 1.0.3 Nov 23 22:57:06.636509 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 22:57:06.651128 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 22:57:06.769714 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 22:57:06.776531 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 22:57:06.798666 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 23 22:57:06.821134 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (976) Nov 23 22:57:06.826072 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9 Nov 23 22:57:06.826266 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:06.853869 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 22:57:06.853953 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 22:57:06.855453 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 22:57:06.868330 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 22:57:06.872871 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:57:06.878132 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 22:57:06.884130 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 22:57:06.895317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 22:57:06.948247 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1010) Nov 23 22:57:06.954064 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:06.954843 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:06.973883 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 22:57:06.974002 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 22:57:06.982160 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:06.984300 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 22:57:06.994537 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 22:57:07.105245 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:57:07.119304 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:57:07.225253 systemd-networkd[1149]: lo: Link UP Nov 23 22:57:07.225275 systemd-networkd[1149]: lo: Gained carrier Nov 23 22:57:07.231252 systemd-networkd[1149]: Enumeration completed Nov 23 22:57:07.233934 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:57:07.233941 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:57:07.243249 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:57:07.254445 systemd[1]: Reached target network.target - Network. Nov 23 22:57:07.264920 systemd-networkd[1149]: eth0: Link UP Nov 23 22:57:07.264942 systemd-networkd[1149]: eth0: Gained carrier Nov 23 22:57:07.264968 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 23 22:57:07.288571 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.17.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 22:57:07.314139 ignition[1082]: Ignition 2.22.0 Nov 23 22:57:07.314169 ignition[1082]: Stage: fetch-offline Nov 23 22:57:07.317492 ignition[1082]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:07.317519 ignition[1082]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:07.319798 ignition[1082]: Ignition finished successfully Nov 23 22:57:07.328209 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:57:07.334497 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 23 22:57:07.389154 ignition[1161]: Ignition 2.22.0 Nov 23 22:57:07.389711 ignition[1161]: Stage: fetch Nov 23 22:57:07.390775 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:07.390801 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:07.390968 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:07.407983 ignition[1161]: PUT result: OK Nov 23 22:57:07.412436 ignition[1161]: parsed url from cmdline: "" Nov 23 22:57:07.412594 ignition[1161]: no config URL provided Nov 23 22:57:07.412615 ignition[1161]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:57:07.412642 ignition[1161]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:57:07.412908 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:07.422490 ignition[1161]: PUT result: OK Nov 23 22:57:07.422615 ignition[1161]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 23 22:57:07.427559 ignition[1161]: GET result: OK Nov 23 22:57:07.428731 ignition[1161]: parsing config with SHA512: b365c314c2da444f3f9c3708c3641e5994504551f23650215e73ceaec73b4041521f3d69b05ca6e8cede7a9479b06181cc4942170e657fe94cb422f55e67abbb Nov 23 22:57:07.442066 unknown[1161]: fetched base config from "system" Nov 23 22:57:07.442764 ignition[1161]: fetch: fetch complete Nov 23 22:57:07.442120 unknown[1161]: fetched base config from "system" Nov 23 22:57:07.442776 ignition[1161]: fetch: fetch passed Nov 23 22:57:07.442136 unknown[1161]: fetched user config from "aws" Nov 23 22:57:07.442864 ignition[1161]: Ignition finished successfully Nov 23 22:57:07.449358 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 22:57:07.458965 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 22:57:07.517657 ignition[1167]: Ignition 2.22.0 Nov 23 22:57:07.518485 ignition[1167]: Stage: kargs Nov 23 22:57:07.519457 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:07.519481 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:07.519616 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:07.529407 ignition[1167]: PUT result: OK Nov 23 22:57:07.535784 ignition[1167]: kargs: kargs passed Nov 23 22:57:07.535970 ignition[1167]: Ignition finished successfully Nov 23 22:57:07.541376 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 22:57:07.546693 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 23 22:57:07.599644 ignition[1174]: Ignition 2.22.0 Nov 23 22:57:07.599678 ignition[1174]: Stage: disks Nov 23 22:57:07.600284 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:07.600308 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:07.600463 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:07.603902 ignition[1174]: PUT result: OK Nov 23 22:57:07.614957 ignition[1174]: disks: disks passed Nov 23 22:57:07.615083 ignition[1174]: Ignition finished successfully Nov 23 22:57:07.617607 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 22:57:07.624434 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 22:57:07.629485 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 22:57:07.637668 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:57:07.641465 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:57:07.645222 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:57:07.651790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 22:57:07.701806 systemd-fsck[1183]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 23 22:57:07.706155 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 22:57:07.714474 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 22:57:07.868154 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none. Nov 23 22:57:07.868860 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 22:57:07.873242 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 22:57:07.880489 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:57:07.887256 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 22:57:07.893628 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 23 22:57:07.897579 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 22:57:07.897649 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:57:07.927062 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 22:57:07.934411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 22:57:07.959141 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1202) Nov 23 22:57:07.965298 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:07.965387 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:07.974891 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 22:57:07.975011 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 22:57:07.977527 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 22:57:08.060053 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 22:57:08.074370 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory Nov 23 22:57:08.084128 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 22:57:08.094351 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 22:57:08.249055 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 22:57:08.255282 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 22:57:08.263136 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 22:57:08.288350 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 22:57:08.293168 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:08.322611 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 22:57:08.343968 ignition[1315]: INFO : Ignition 2.22.0 Nov 23 22:57:08.343968 ignition[1315]: INFO : Stage: mount Nov 23 22:57:08.347769 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:08.347769 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:08.347769 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:08.356229 ignition[1315]: INFO : PUT result: OK Nov 23 22:57:08.360478 ignition[1315]: INFO : mount: mount passed Nov 23 22:57:08.366235 ignition[1315]: INFO : Ignition finished successfully Nov 23 22:57:08.363885 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 22:57:08.369938 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 22:57:08.872038 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:57:08.927149 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Nov 23 22:57:08.932133 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:57:08.932212 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:57:08.942229 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 22:57:08.942360 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 22:57:08.946244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 22:57:09.015305 ignition[1343]: INFO : Ignition 2.22.0 Nov 23 22:57:09.018124 ignition[1343]: INFO : Stage: files Nov 23 22:57:09.019847 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:09.022199 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:09.025056 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:09.028226 ignition[1343]: INFO : PUT result: OK Nov 23 22:57:09.034560 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Nov 23 22:57:09.039386 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 22:57:09.039386 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 22:57:09.052008 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 22:57:09.055361 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 22:57:09.059255 unknown[1343]: wrote ssh authorized keys file for user: core Nov 23 22:57:09.061941 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 22:57:09.068135 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 22:57:09.068135 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 23 22:57:09.149712 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 22:57:09.210275 systemd-networkd[1149]: eth0: Gained IPv6LL Nov 23 22:57:09.292243 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:57:09.296953 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:57:09.332528 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:57:09.332528 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:57:09.332528 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:57:09.346591 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:57:09.346591 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:57:09.346591 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 23 22:57:09.779434 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 22:57:10.186739 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:57:10.186739 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 22:57:10.195023 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:57:10.202609 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:57:10.202609 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 22:57:10.210045 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 23 22:57:10.210045 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 22:57:10.210045 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:57:10.210045 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:57:10.210045 ignition[1343]: INFO : files: files passed Nov 23 22:57:10.210045 ignition[1343]: INFO : Ignition finished successfully Nov 23 22:57:10.230726 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 22:57:10.237620 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 22:57:10.249191 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 22:57:10.266569 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 22:57:10.267404 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 22:57:10.287703 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:57:10.287703 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:57:10.296278 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:57:10.299683 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:57:10.307591 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 22:57:10.314202 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Nov 23 22:57:10.406516 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 22:57:10.408380 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 22:57:10.415573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 22:57:10.418847 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 22:57:10.426420 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 22:57:10.428389 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 22:57:10.469535 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:57:10.476508 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 22:57:10.512008 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:57:10.515216 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:57:10.523413 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 22:57:10.523793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 22:57:10.524027 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:57:10.525699 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 22:57:10.528408 systemd[1]: Stopped target basic.target - Basic System. Nov 23 22:57:10.529924 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 22:57:10.530634 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:57:10.531002 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 22:57:10.531771 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:57:10.532194 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 22:57:10.532515 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:57:10.532912 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 22:57:10.533670 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 22:57:10.534290 systemd[1]: Stopped target swap.target - Swaps. Nov 23 22:57:10.538904 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 22:57:10.540638 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:57:10.541742 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:57:10.544429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:57:10.545115 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 22:57:10.559944 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:57:10.560283 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 22:57:10.560557 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 22:57:10.571582 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 22:57:10.572296 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:57:10.580388 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 22:57:10.580674 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Nov 23 22:57:10.589873 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 22:57:10.600709 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 22:57:10.601004 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:57:10.607716 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 22:57:10.620396 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 22:57:10.620701 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:57:10.639480 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 22:57:10.639718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:57:10.707027 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 22:57:10.711190 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 22:57:10.726909 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 22:57:10.734994 ignition[1397]: INFO : Ignition 2.22.0 Nov 23 22:57:10.734994 ignition[1397]: INFO : Stage: umount Nov 23 22:57:10.739522 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:57:10.739522 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 22:57:10.739522 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 22:57:10.739522 ignition[1397]: INFO : PUT result: OK Nov 23 22:57:10.758721 ignition[1397]: INFO : umount: umount passed Nov 23 22:57:10.758721 ignition[1397]: INFO : Ignition finished successfully Nov 23 22:57:10.780951 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 22:57:10.781201 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 22:57:10.790065 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 22:57:10.792667 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 22:57:10.800621 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 22:57:10.800744 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 22:57:10.808314 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 23 22:57:10.808430 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 23 22:57:10.812201 systemd[1]: Stopped target network.target - Network. Nov 23 22:57:10.819970 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 22:57:10.820140 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:57:10.835351 systemd[1]: Stopped target paths.target - Path Units. Nov 23 22:57:10.838506 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 22:57:10.843325 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:57:10.847037 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 22:57:10.849682 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 22:57:10.861510 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 22:57:10.861602 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:57:10.867760 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 22:57:10.867839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:57:10.871741 systemd[1]: ignition-setup.service: Deactivated successfully. 
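Both Ignition stages above (files and umount) first obtain an IMDSv2 session token, which is what the "PUT http://169.254.169.254/latest/api/token" entries record. Below is a minimal sketch of that token handshake using only the Python standard library; the header names are the standard IMDSv2 ones, and the example metadata path mirrors the 2021-01-03 paths fetched later in this log. It only works when run on an EC2 instance.

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    # PUT /latest/api/token with a TTL header returns an IMDSv2 session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # Every subsequent metadata GET must carry the session token.
    req = urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    tok = imds_token()
    print(imds_get("/2021-01-03/meta-data/instance-id", tok))
```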
Nov 23 22:57:10.871914 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 22:57:10.882605 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 22:57:10.882717 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 22:57:10.886464 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 22:57:10.894857 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 22:57:10.897899 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 22:57:10.900611 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 22:57:10.910862 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 22:57:10.911054 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 22:57:10.922602 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 22:57:10.923057 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 22:57:10.923372 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 22:57:10.935859 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 22:57:10.938839 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 22:57:10.944856 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 22:57:10.944941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:57:10.948623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 22:57:10.948732 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 22:57:10.991406 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 22:57:10.996105 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 22:57:10.996235 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:57:10.996419 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 22:57:10.996502 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:57:11.006159 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 22:57:11.006270 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 22:57:11.020223 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 22:57:11.020346 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:57:11.033292 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:57:11.037906 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 22:57:11.040885 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:57:11.073394 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 22:57:11.080898 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:57:11.084705 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 22:57:11.084793 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 22:57:11.092139 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 22:57:11.092260 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:57:11.095221 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Nov 23 22:57:11.095334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:57:11.098865 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 22:57:11.098990 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 22:57:11.108270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 22:57:11.108388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:57:11.121852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 22:57:11.146061 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 22:57:11.146235 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:57:11.150475 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 22:57:11.150576 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:57:11.159576 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:57:11.159675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:11.172777 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 22:57:11.172895 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 22:57:11.172985 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:57:11.180487 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 22:57:11.182706 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 22:57:11.205328 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 22:57:11.205816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 22:57:11.214815 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 22:57:11.221950 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 22:57:11.263848 systemd[1]: Switching root. Nov 23 22:57:11.329008 systemd-journald[259]: Journal stopped Nov 23 22:57:13.513019 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). Nov 23 22:57:13.518677 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 22:57:13.518738 kernel: SELinux: policy capability open_perms=1 Nov 23 22:57:13.518772 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 22:57:13.518807 kernel: SELinux: policy capability always_check_network=0 Nov 23 22:57:13.518839 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 22:57:13.518878 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 22:57:13.518912 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 22:57:13.518949 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 22:57:13.518978 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 22:57:13.519010 kernel: audit: type=1403 audit(1763938631.596:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 22:57:13.519059 systemd[1]: Successfully loaded SELinux policy in 87.382ms. Nov 23 22:57:13.519154 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.768ms. 
Nov 23 22:57:13.519199 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:57:13.519239 systemd[1]: Detected virtualization amazon. Nov 23 22:57:13.519272 systemd[1]: Detected architecture arm64. Nov 23 22:57:13.519305 systemd[1]: Detected first boot. Nov 23 22:57:13.519336 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:57:13.519381 zram_generator::config[1441]: No configuration found. Nov 23 22:57:13.519417 kernel: NET: Registered PF_VSOCK protocol family Nov 23 22:57:13.519449 systemd[1]: Populated /etc with preset unit settings. Nov 23 22:57:13.519483 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 22:57:13.519515 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 22:57:13.519552 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 22:57:13.519587 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 22:57:13.519623 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 22:57:13.519658 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 22:57:13.519690 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 22:57:13.519724 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 22:57:13.519762 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 22:57:13.519795 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 22:57:13.519839 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 22:57:13.519870 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 22:57:13.519899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:57:13.519935 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:57:13.519969 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 22:57:13.520008 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 22:57:13.520038 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 22:57:13.520070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:57:13.525208 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 23 22:57:13.525268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:57:13.525299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:57:13.525331 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 22:57:13.525367 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 22:57:13.525401 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 22:57:13.525431 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
Nov 23 22:57:13.525459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:57:13.525492 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:57:13.525529 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:57:13.525562 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:57:13.525593 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 22:57:13.525628 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 22:57:13.525658 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 22:57:13.525690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:57:13.525720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:57:13.525749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:57:13.525781 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 22:57:13.525817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 22:57:13.525849 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 22:57:13.525882 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 22:57:13.525921 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 22:57:13.525950 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 22:57:13.525982 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 22:57:13.526182 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 22:57:13.526232 systemd[1]: Reached target machines.target - Containers. Nov 23 22:57:13.526269 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 22:57:13.526301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:13.526331 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:57:13.526365 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 22:57:13.526397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:57:13.526434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:57:13.526464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:57:13.526495 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 22:57:13.526524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:57:13.526560 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 22:57:13.526590 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 22:57:13.526622 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 22:57:13.526655 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 22:57:13.526686 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 23 22:57:13.526720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:13.526750 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:57:13.526785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:57:13.526828 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:57:13.526860 kernel: fuse: init (API version 7.41) Nov 23 22:57:13.526896 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 22:57:13.526928 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 22:57:13.526960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:57:13.527002 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 22:57:13.527034 systemd[1]: Stopped verity-setup.service. Nov 23 22:57:13.527065 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 22:57:13.530213 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 22:57:13.530298 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 22:57:13.530331 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 22:57:13.530368 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 22:57:13.530403 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 22:57:13.530433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:57:13.530467 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 22:57:13.530497 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 22:57:13.530533 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:57:13.530562 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:57:13.530592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:57:13.530622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:57:13.530658 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 22:57:13.530687 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 22:57:13.530716 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:57:13.530747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 22:57:13.530843 systemd-journald[1524]: Collecting audit messages is disabled. Nov 23 22:57:13.530914 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 22:57:13.530947 kernel: loop: module loaded Nov 23 22:57:13.530983 systemd-journald[1524]: Journal started Nov 23 22:57:13.531032 systemd-journald[1524]: Runtime Journal (/run/log/journal/ec2ff39cf7b9c4ec59b0807414df6d96) is 8M, max 75.3M, 67.3M free. Nov 23 22:57:12.803426 systemd[1]: Queued start job for default target multi-user.target. Nov 23 22:57:12.819960 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 23 22:57:13.545838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:57:12.820980 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 23 22:57:13.574079 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:57:13.557564 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:57:13.559236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:57:13.565905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 22:57:13.571978 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 22:57:13.578196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:57:13.584274 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 22:57:13.638309 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 22:57:13.652342 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:57:13.656909 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 22:57:13.656974 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:57:13.662488 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 22:57:13.670155 kernel: ACPI: bus type drm_connector registered Nov 23 22:57:13.672566 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 22:57:13.677958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:13.685469 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 22:57:13.697982 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 22:57:13.704338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:57:13.713539 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 22:57:13.719306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:57:13.725805 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 22:57:13.736528 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 22:57:13.743838 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:57:13.746072 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:57:13.753267 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:57:13.774763 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 22:57:13.787847 systemd-journald[1524]: Time spent on flushing to /var/log/journal/ec2ff39cf7b9c4ec59b0807414df6d96 is 87.329ms for 925 entries. Nov 23 22:57:13.787847 systemd-journald[1524]: System Journal (/var/log/journal/ec2ff39cf7b9c4ec59b0807414df6d96) is 8M, max 195.6M, 187.6M free. Nov 23 22:57:13.911504 systemd-journald[1524]: Received client request to flush runtime journal. Nov 23 22:57:13.911651 kernel: loop0: detected capacity change from 0 to 119840 Nov 23 22:57:13.845224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Nov 23 22:57:13.917265 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 22:57:13.851538 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 22:57:13.863756 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 22:57:13.921215 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 22:57:13.956990 kernel: loop1: detected capacity change from 0 to 61264 Nov 23 22:57:13.974504 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 22:57:13.981321 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 22:57:14.020580 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 22:57:14.031078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:57:14.111792 kernel: loop2: detected capacity change from 0 to 207008 Nov 23 22:57:14.113144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:57:14.140623 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Nov 23 22:57:14.141346 systemd-tmpfiles[1592]: ACLs are not supported, ignoring. Nov 23 22:57:14.160788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:57:14.328726 kernel: loop3: detected capacity change from 0 to 100632 Nov 23 22:57:14.400753 kernel: loop4: detected capacity change from 0 to 119840 Nov 23 22:57:14.449158 kernel: loop5: detected capacity change from 0 to 61264 Nov 23 22:57:14.479363 kernel: loop6: detected capacity change from 0 to 207008 Nov 23 22:57:14.529227 kernel: loop7: detected capacity change from 0 to 100632 Nov 23 22:57:14.559242 (sd-merge)[1599]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 23 22:57:14.561758 (sd-merge)[1599]: Merged extensions into '/usr'. Nov 23 22:57:14.577381 systemd[1]: Reload requested from client PID 1577 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 22:57:14.578050 systemd[1]: Reloading... Nov 23 22:57:14.719290 ldconfig[1572]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 22:57:14.781170 zram_generator::config[1623]: No configuration found. Nov 23 22:57:15.301554 systemd[1]: Reloading finished in 722 ms. Nov 23 22:57:15.322234 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 22:57:15.325909 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 22:57:15.329918 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 22:57:15.350531 systemd[1]: Starting ensure-sysext.service... Nov 23 22:57:15.358208 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:57:15.372287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:57:15.411375 systemd[1]: Reload requested from client PID 1678 ('systemctl') (unit ensure-sysext.service)... Nov 23 22:57:15.411409 systemd[1]: Reloading... Nov 23 22:57:15.459621 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 22:57:15.459729 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Nov 23 22:57:15.460423 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 22:57:15.460958 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 22:57:15.462472 systemd-udevd[1680]: Using default interface naming scheme 'v255'. Nov 23 22:57:15.468782 systemd-tmpfiles[1679]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 22:57:15.471991 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Nov 23 22:57:15.472178 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Nov 23 22:57:15.494083 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:57:15.497178 systemd-tmpfiles[1679]: Skipping /boot Nov 23 22:57:15.550198 systemd-tmpfiles[1679]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:57:15.550229 systemd-tmpfiles[1679]: Skipping /boot Nov 23 22:57:15.685142 zram_generator::config[1729]: No configuration found. Nov 23 22:57:16.075274 (udev-worker)[1715]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:57:16.407351 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 23 22:57:16.408166 systemd[1]: Reloading finished in 996 ms. Nov 23 22:57:16.472928 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:57:16.498269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:57:16.564267 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:57:16.573568 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 22:57:16.581614 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 22:57:16.591749 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:57:16.604252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:57:16.610577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 22:57:16.622868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:16.628821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:57:16.643755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:57:16.650955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:57:16.653642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:16.654171 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:16.664590 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:16.664989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 23 22:57:16.665259 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:16.680402 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 22:57:16.697628 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:57:16.716306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:57:16.719062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:57:16.719393 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:57:16.719776 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 22:57:16.772278 systemd[1]: Finished ensure-sysext.service. Nov 23 22:57:16.809208 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 22:57:16.848487 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 22:57:16.875139 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 22:57:16.879607 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 22:57:16.892601 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 22:57:16.903022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:57:16.903545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:57:16.906827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:57:16.907256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:57:16.924034 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:57:16.928551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:57:16.939021 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:57:16.939694 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:57:16.943004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:57:16.949258 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:57:16.951191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:57:16.974328 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 22:57:17.006331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 22:57:17.021167 augenrules[1934]: No rules Nov 23 22:57:17.025995 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:57:17.027534 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:57:17.036433 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Nov 23 22:57:17.093905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 22:57:17.186198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:57:17.189668 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 22:57:17.331082 systemd-networkd[1888]: lo: Link UP Nov 23 22:57:17.331655 systemd-networkd[1888]: lo: Gained carrier Nov 23 22:57:17.335018 systemd-networkd[1888]: Enumeration completed Nov 23 22:57:17.335457 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:57:17.337884 systemd-resolved[1889]: Positive Trust Anchors: Nov 23 22:57:17.339349 systemd-networkd[1888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:57:17.339494 systemd-networkd[1888]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:57:17.340131 systemd-resolved[1889]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:57:17.340217 systemd-resolved[1889]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:57:17.342161 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 22:57:17.345517 systemd-networkd[1888]: eth0: Link UP Nov 23 22:57:17.345997 systemd-networkd[1888]: eth0: Gained carrier Nov 23 22:57:17.346187 systemd-networkd[1888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:57:17.350564 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 22:57:17.362209 systemd-networkd[1888]: eth0: DHCPv4 address 172.31.17.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 22:57:17.368429 systemd-resolved[1889]: Defaulting to hostname 'linux'. Nov 23 22:57:17.374260 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:57:17.377192 systemd[1]: Reached target network.target - Network. Nov 23 22:57:17.380279 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:57:17.383319 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:57:17.386247 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 22:57:17.389215 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 22:57:17.392567 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 22:57:17.395245 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 22:57:17.398124 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
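The DHCPv4 lease reported above (172.31.17.147/20 with gateway 172.31.16.1) can be sanity-checked with Python's ipaddress module; an illustrative snippet, not part of the boot flow:

```python
import ipaddress

# Lease reported by systemd-networkd in the log above.
iface = ipaddress.ip_interface("172.31.17.147/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                # 172.31.16.0/20
print(gateway in iface.network)     # True: the gateway is on-link
print(iface.network.num_addresses)  # 4096 addresses in a /20
```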
Nov 23 22:57:17.401010 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 22:57:17.401060 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:57:17.403252 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:57:17.408387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 22:57:17.415812 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 22:57:17.425274 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 22:57:17.428633 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 22:57:17.431798 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 22:57:17.443403 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 22:57:17.447029 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 22:57:17.452283 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 22:57:17.455745 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 22:57:17.459382 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:57:17.462244 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:57:17.464920 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:57:17.464990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:57:17.467733 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 22:57:17.473426 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 22:57:17.482453 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 22:57:17.490523 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 22:57:17.498454 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 22:57:17.508842 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 22:57:17.512338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 22:57:17.516545 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 22:57:17.525607 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 22:57:17.532677 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 22:57:17.545376 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 23 22:57:17.551035 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 22:57:17.559707 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 22:57:17.575650 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 22:57:17.580149 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 22:57:17.588621 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Nov 23 22:57:17.591537 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 22:57:17.598567 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 22:57:17.645135 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 22:57:17.648361 jq[1976]: true Nov 23 22:57:17.668262 jq[1964]: false Nov 23 22:57:17.657452 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 22:57:17.660280 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 22:57:17.680326 jq[1980]: true Nov 23 22:57:17.714512 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 22:57:17.716216 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 22:57:17.727985 tar[1987]: linux-arm64/LICENSE Nov 23 22:57:17.731713 tar[1987]: linux-arm64/helm Nov 23 22:57:17.811119 extend-filesystems[1965]: Found /dev/nvme0n1p6 Nov 23 22:57:17.816125 (ntainerd)[2005]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 22:57:17.839044 dbus-daemon[1962]: [system] SELinux support is enabled Nov 23 22:57:17.839405 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 22:57:17.847442 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 22:57:17.847504 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 22:57:17.850569 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 22:57:17.850610 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 22:57:17.860149 update_engine[1975]: I20251123 22:57:17.843024 1975 main.cc:92] Flatcar Update Engine starting Nov 23 22:57:17.861889 extend-filesystems[1965]: Found /dev/nvme0n1p9 Nov 23 22:57:17.869029 dbus-daemon[1962]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1888 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 23 22:57:17.875277 extend-filesystems[1965]: Checking size of /dev/nvme0n1p9 Nov 23 22:57:17.885575 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 23 22:57:17.895977 update_engine[1975]: I20251123 22:57:17.895505 1975 update_check_scheduler.cc:74] Next update check in 2m47s Nov 23 22:57:17.888332 systemd[1]: Started update-engine.service - Update Engine. 
Nov 23 22:57:17.911562 bash[2019]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:57:17.941948 ntpd[1967]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: ---------------------------------------------------- Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: corporation. Support and training for ntp-4 are Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: available at https://www.nwtime.org/support Nov 23 22:57:17.947126 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: ---------------------------------------------------- Nov 23 22:57:17.944274 ntpd[1967]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:17.944294 ntpd[1967]: ---------------------------------------------------- Nov 23 22:57:17.944311 ntpd[1967]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:17.944328 ntpd[1967]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:17.944344 ntpd[1967]: corporation. Support and training for ntp-4 are Nov 23 22:57:17.944359 ntpd[1967]: available at https://www.nwtime.org/support Nov 23 22:57:17.944377 ntpd[1967]: ---------------------------------------------------- Nov 23 22:57:17.954547 ntpd[1967]: proto: precision = 0.096 usec (-23) Nov 23 22:57:17.956531 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: proto: precision = 0.096 usec (-23) Nov 23 22:57:17.956531 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: basedate set to 2025-11-11 Nov 23 22:57:17.956531 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:17.956531 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:17.956531 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:17.955850 ntpd[1967]: basedate set to 2025-11-11 Nov 23 22:57:17.955882 ntpd[1967]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:17.956068 ntpd[1967]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:17.956143 ntpd[1967]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:17.960915 ntpd[1967]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:17.964297 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:17.964297 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Listen normally on 3 eth0 172.31.17.147:123 Nov 23 22:57:17.964297 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:17.964297 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: bind(21) AF_INET6 [fe80::44b:d1ff:fee9:c245%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 22:57:17.964297 ntpd[1967]: 23 Nov 22:57:17 ntpd[1967]: unable to create socket on eth0 (5) for [fe80::44b:d1ff:fee9:c245%2]:123 Nov 23 22:57:17.961011 ntpd[1967]: Listen normally on 3 eth0 172.31.17.147:123 Nov 23 22:57:17.961064 ntpd[1967]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:17.961147 ntpd[1967]: bind(21) AF_INET6 [fe80::44b:d1ff:fee9:c245%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 22:57:17.961189 ntpd[1967]: unable to 
create socket on eth0 (5) for [fe80::44b:d1ff:fee9:c245%2]:123 Nov 23 22:57:17.965785 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 22:57:17.970111 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 22:57:17.970693 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 22:57:17.973964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 22:57:17.984459 systemd-coredump[2031]: Process 1967 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Nov 23 22:57:18.008122 extend-filesystems[1965]: Resized partition /dev/nvme0n1p9 Nov 23 22:57:18.011802 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 22:57:18.015862 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 23 22:57:18.026633 extend-filesystems[2037]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 22:57:18.032612 systemd[1]: Starting sshkeys.service... Nov 23 22:57:18.050636 systemd[1]: Started systemd-coredump@0-2031-0.service - Process Core Dump (PID 2031/UID 0). Nov 23 22:57:18.081147 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 23 22:57:18.091014 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 23 22:57:18.108042 coreos-metadata[1961]: Nov 23 22:57:18.106 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 22:57:18.116165 coreos-metadata[1961]: Nov 23 22:57:18.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 23 22:57:18.122121 coreos-metadata[1961]: Nov 23 22:57:18.121 INFO Fetch successful Nov 23 22:57:18.122121 coreos-metadata[1961]: Nov 23 22:57:18.121 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 23 22:57:18.128234 coreos-metadata[1961]: Nov 23 22:57:18.127 INFO Fetch successful Nov 23 22:57:18.128234 coreos-metadata[1961]: Nov 23 22:57:18.127 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 23 22:57:18.130610 coreos-metadata[1961]: Nov 23 22:57:18.130 INFO Fetch successful Nov 23 22:57:18.130610 coreos-metadata[1961]: Nov 23 22:57:18.130 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 23 22:57:18.137547 coreos-metadata[1961]: Nov 23 22:57:18.136 INFO Fetch successful Nov 23 22:57:18.137547 coreos-metadata[1961]: Nov 23 22:57:18.137 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 23 22:57:18.144848 coreos-metadata[1961]: Nov 23 22:57:18.142 INFO Fetch failed with 404: resource not found Nov 23 22:57:18.144848 coreos-metadata[1961]: Nov 23 22:57:18.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 23 22:57:18.148510 coreos-metadata[1961]: Nov 23 22:57:18.148 INFO Fetch successful Nov 23 22:57:18.148804 coreos-metadata[1961]: Nov 23 22:57:18.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 23 22:57:18.154379 coreos-metadata[1961]: Nov 23 22:57:18.154 INFO Fetch successful Nov 23 22:57:18.154704 coreos-metadata[1961]: Nov 23 22:57:18.154 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 23 22:57:18.161394 coreos-metadata[1961]: Nov 23 22:57:18.161 INFO Fetch successful Nov 23 22:57:18.161709 coreos-metadata[1961]: Nov 23 22:57:18.161 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 23 22:57:18.171154 
coreos-metadata[1961]: Nov 23 22:57:18.164 INFO Fetch successful Nov 23 22:57:18.171154 coreos-metadata[1961]: Nov 23 22:57:18.165 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 23 22:57:18.171154 coreos-metadata[1961]: Nov 23 22:57:18.167 INFO Fetch successful Nov 23 22:57:18.185014 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 22:57:18.201584 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 22:57:18.341521 systemd-logind[1974]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 22:57:18.341576 systemd-logind[1974]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 23 22:57:18.347570 systemd-logind[1974]: New seat seat0. Nov 23 22:57:18.349392 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 22:57:18.354374 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 22:57:18.356109 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 22:57:18.380585 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 23 22:57:18.407813 extend-filesystems[2037]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 23 22:57:18.407813 extend-filesystems[2037]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 23 22:57:18.407813 extend-filesystems[2037]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 23 22:57:18.430227 extend-filesystems[1965]: Resized filesystem in /dev/nvme0n1p9 Nov 23 22:57:18.414148 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 22:57:18.416462 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 22:57:18.529720 coreos-metadata[2052]: Nov 23 22:57:18.529 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 22:57:18.531154 coreos-metadata[2052]: Nov 23 22:57:18.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 23 22:57:18.532466 coreos-metadata[2052]: Nov 23 22:57:18.531 INFO Fetch successful Nov 23 22:57:18.532466 coreos-metadata[2052]: Nov 23 22:57:18.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 23 22:57:18.533993 coreos-metadata[2052]: Nov 23 22:57:18.533 INFO Fetch successful Nov 23 22:57:18.538151 unknown[2052]: wrote ssh authorized keys file for user: core Nov 23 22:57:18.579078 locksmithd[2025]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 22:57:18.588598 update-ssh-keys[2093]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:57:18.589990 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 22:57:18.601555 systemd[1]: Finished sshkeys.service. Nov 23 22:57:18.894967 containerd[2005]: time="2025-11-23T22:57:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 22:57:18.898366 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
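The on-line resize above grew /dev/nvme0n1p9 from 553472 to 3587067 blocks of 4 KiB. As a back-of-the-envelope check (illustrative only), that is roughly 2.1 GiB before and 13.7 GiB after:

```python
BLOCK = 4096  # 4 KiB ext4 block size, per the resize2fs output above

before = 553_472 * BLOCK
after = 3_587_067 * BLOCK

GIB = 1024 ** 3
print(f"before: {before / GIB:.2f} GiB")  # ~2.11 GiB as shipped in the image
print(f"after:  {after / GIB:.2f} GiB")   # ~13.68 GiB after growing to the partition
```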
Nov 23 22:57:18.904643 containerd[2005]: time="2025-11-23T22:57:18.904008025Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 22:57:18.914612 dbus-daemon[1962]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 23 22:57:18.915819 dbus-daemon[1962]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 23 22:57:18.925498 systemd-coredump[2040]: Process 1967 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1967: #0 0x0000aaaada2f0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaada29fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaada2a0240 n/a (ntpd + 0x10240) #3 0x0000aaaada29be14 n/a (ntpd + 0xbe14) #4 0x0000aaaada29d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaada2a5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaada29738c n/a (ntpd + 0x738c) #7 0x0000ffff9b702034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff9b702118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaada2973f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Nov 23 22:57:18.930471 systemd[1]: Starting polkit.service - Authorization Manager... Nov 23 22:57:18.940080 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 23 22:57:18.940444 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 23 22:57:18.951659 systemd[1]: systemd-coredump@0-2031-0.service: Deactivated successfully. Nov 23 22:57:19.026865 containerd[2005]: time="2025-11-23T22:57:19.026770102Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.088µs" Nov 23 22:57:19.026865 containerd[2005]: time="2025-11-23T22:57:19.026847298Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 22:57:19.027130 containerd[2005]: time="2025-11-23T22:57:19.026891770Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 22:57:19.033474 containerd[2005]: time="2025-11-23T22:57:19.033339838Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 22:57:19.033631 containerd[2005]: time="2025-11-23T22:57:19.033497182Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 22:57:19.034123 containerd[2005]: time="2025-11-23T22:57:19.034016938Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:57:19.036412 containerd[2005]: time="2025-11-23T22:57:19.036331654Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:57:19.036547 containerd[2005]: time="2025-11-23T22:57:19.036417430Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:57:19.041925 containerd[2005]: time="2025-11-23T22:57:19.039150838Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs 
snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:57:19.042072 containerd[2005]: time="2025-11-23T22:57:19.041917798Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:57:19.042072 containerd[2005]: time="2025-11-23T22:57:19.042019606Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:57:19.042212 containerd[2005]: time="2025-11-23T22:57:19.042070006Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 22:57:19.042442 containerd[2005]: time="2025-11-23T22:57:19.042390946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 22:57:19.046157 containerd[2005]: time="2025-11-23T22:57:19.046061854Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:57:19.046289 containerd[2005]: time="2025-11-23T22:57:19.046214218Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:57:19.046289 containerd[2005]: time="2025-11-23T22:57:19.046248958Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 22:57:19.046668 containerd[2005]: time="2025-11-23T22:57:19.046355194Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 22:57:19.051177 containerd[2005]: time="2025-11-23T22:57:19.049447450Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 22:57:19.051177 containerd[2005]: time="2025-11-23T22:57:19.051047614Z" level=info msg="metadata content store policy set" policy=shared Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061605370Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061758442Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061798390Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061828954Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061865914Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061894378Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061941838Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.061999858Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: 
time="2025-11-23T22:57:19.062047234Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.062078374Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.062156518Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.062194318Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.062467210Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 22:57:19.067132 containerd[2005]: time="2025-11-23T22:57:19.062530294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062570338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062624086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062655082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062683102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062732746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062761666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062790586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062817490Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 22:57:19.067879 containerd[2005]: time="2025-11-23T22:57:19.062845270Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 22:57:19.079121 containerd[2005]: time="2025-11-23T22:57:19.069556018Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 22:57:19.079121 containerd[2005]: time="2025-11-23T22:57:19.069626278Z" level=info msg="Start snapshots syncer" Nov 23 22:57:19.079121 containerd[2005]: time="2025-11-23T22:57:19.069677626Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 22:57:19.070966 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. 
Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074239810Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074371246Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074477614Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074734282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074792218Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074843950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074876626Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074907814Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074937094Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.074967382Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.075031522Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 22:57:19.079465 containerd[2005]: time="2025-11-23T22:57:19.075063766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080197426Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080331634Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080599138Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080626750Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080652970Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080675242Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080700046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080725642Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080898694Z" level=info msg="runtime interface created" Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080916274Z" level=info msg="created NRI interface" Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080939470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.080970478Z" level=info msg="Connect containerd service" Nov 23 22:57:19.088508 containerd[2005]: time="2025-11-23T22:57:19.081035038Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 22:57:19.090260 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 22:57:19.097033 containerd[2005]: time="2025-11-23T22:57:19.096964714Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:57:19.195208 systemd-networkd[1888]: eth0: Gained IPv6LL Nov 23 22:57:19.203224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 22:57:19.206953 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 22:57:19.219712 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 23 22:57:19.234010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:19.245233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
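The containerd error above about /etc/cni/net.d is expected at this stage: the CRI plugin's cniConfDir (shown in the cri config dump earlier) contains no network config yet, and a pod network add-on normally installs one after the node joins a cluster. As a purely hypothetical illustration of the kind of file it looks for, the sketch below writes a minimal bridge/host-local conflist; the name, subnet and filename are placeholder assumptions, not values from this host.

import json
import pathlib

CNI_DIR = pathlib.Path("/etc/cni/net.d")  # the directory containerd scans (cniConfDir)

# Hypothetical minimal conflist; a real pod-network add-on would install its own.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-pod-network",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.85.0.0/16"}]],  # placeholder subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    CNI_DIR.mkdir(parents=True, exist_ok=True)
    (CNI_DIR / "10-example.conflist").write_text(json.dumps(conflist, indent=2))
    print("wrote", CNI_DIR / "10-example.conflist")

Once a valid conflist is present, the "cni network conf syncer" started later in this log picks it up without a containerd restart.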
Nov 23 22:57:19.264381 ntpd[2161]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: ---------------------------------------------------- Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: corporation. Support and training for ntp-4 are Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: available at https://www.nwtime.org/support Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: ---------------------------------------------------- Nov 23 22:57:19.268718 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: proto: precision = 0.096 usec (-23) Nov 23 22:57:19.264503 ntpd[2161]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:57:19.264523 ntpd[2161]: ---------------------------------------------------- Nov 23 22:57:19.264540 ntpd[2161]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:57:19.264556 ntpd[2161]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:57:19.264573 ntpd[2161]: corporation. Support and training for ntp-4 are Nov 23 22:57:19.264589 ntpd[2161]: available at https://www.nwtime.org/support Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: basedate set to 2025-11-11 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listen normally on 3 eth0 172.31.17.147:123 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listen normally on 5 eth0 [fe80::44b:d1ff:fee9:c245%2]:123 Nov 23 22:57:19.285382 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: Listening on routing socket on fd #22 for interface updates Nov 23 22:57:19.264615 ntpd[2161]: ---------------------------------------------------- Nov 23 22:57:19.265895 ntpd[2161]: proto: precision = 0.096 usec (-23) Nov 23 22:57:19.274333 ntpd[2161]: basedate set to 2025-11-11 Nov 23 22:57:19.274364 ntpd[2161]: gps base set to 2025-11-16 (week 2393) Nov 23 22:57:19.274516 ntpd[2161]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:57:19.274561 ntpd[2161]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:57:19.274842 ntpd[2161]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:57:19.274887 ntpd[2161]: Listen normally on 3 eth0 172.31.17.147:123 Nov 23 22:57:19.274931 ntpd[2161]: Listen normally on 4 lo [::1]:123 Nov 23 22:57:19.274975 ntpd[2161]: Listen normally on 5 eth0 [fe80::44b:d1ff:fee9:c245%2]:123 Nov 23 22:57:19.275017 ntpd[2161]: Listening on routing socket on fd #22 for interface updates Nov 23 22:57:19.301466 ntpd[2161]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:19.301531 ntpd[2161]: kernel 
reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:19.301711 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:19.301711 ntpd[2161]: 23 Nov 22:57:19 ntpd[2161]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:57:19.471399 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 22:57:19.562732 polkitd[2139]: Started polkitd version 126 Nov 23 22:57:19.579055 containerd[2005]: time="2025-11-23T22:57:19.578977212Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 22:57:19.580648 amazon-ssm-agent[2173]: Initializing new seelog logger Nov 23 22:57:19.582146 containerd[2005]: time="2025-11-23T22:57:19.581615124Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 22:57:19.582146 containerd[2005]: time="2025-11-23T22:57:19.581681076Z" level=info msg="Start subscribing containerd event" Nov 23 22:57:19.582146 containerd[2005]: time="2025-11-23T22:57:19.581842764Z" level=info msg="Start recovering state" Nov 23 22:57:19.582146 containerd[2005]: time="2025-11-23T22:57:19.582019320Z" level=info msg="Start event monitor" Nov 23 22:57:19.582146 containerd[2005]: time="2025-11-23T22:57:19.582045636Z" level=info msg="Start cni network conf syncer for default" Nov 23 22:57:19.582146 containerd[2005]: time="2025-11-23T22:57:19.582064176Z" level=info msg="Start streaming server" Nov 23 22:57:19.582886 containerd[2005]: time="2025-11-23T22:57:19.582083232Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 22:57:19.582886 containerd[2005]: time="2025-11-23T22:57:19.582354972Z" level=info msg="runtime interface starting up..." Nov 23 22:57:19.582886 containerd[2005]: time="2025-11-23T22:57:19.582371520Z" level=info msg="starting plugins..." Nov 23 22:57:19.582886 containerd[2005]: time="2025-11-23T22:57:19.582402612Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 22:57:19.583264 containerd[2005]: time="2025-11-23T22:57:19.583226952Z" level=info msg="containerd successfully booted in 0.691629s" Nov 23 22:57:19.583263 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 22:57:19.588187 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Nov 23 22:57:19.588688 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.588775 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.590334 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 processing appconfig overrides Nov 23 22:57:19.594382 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.594382 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.594531 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 processing appconfig overrides Nov 23 22:57:19.594754 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.594754 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.594882 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 processing appconfig overrides Nov 23 22:57:19.600554 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.5942 INFO Proxy environment variables: Nov 23 22:57:19.603121 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 23 22:57:19.603121 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:19.603413 amazon-ssm-agent[2173]: 2025/11/23 22:57:19 processing appconfig overrides Nov 23 22:57:19.606605 polkitd[2139]: Loading rules from directory /etc/polkit-1/rules.d Nov 23 22:57:19.610171 polkitd[2139]: Loading rules from directory /run/polkit-1/rules.d Nov 23 22:57:19.610276 polkitd[2139]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 22:57:19.610914 polkitd[2139]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 23 22:57:19.610984 polkitd[2139]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 22:57:19.611069 polkitd[2139]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 23 22:57:19.624773 polkitd[2139]: Finished loading, compiling and executing 2 rules Nov 23 22:57:19.625722 systemd[1]: Started polkit.service - Authorization Manager. Nov 23 22:57:19.635430 dbus-daemon[1962]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 23 22:57:19.640074 polkitd[2139]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 23 22:57:19.703130 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.5942 INFO https_proxy: Nov 23 22:57:19.709661 systemd-hostnamed[2023]: Hostname set to (transient) Nov 23 22:57:19.712826 systemd-resolved[1889]: System hostname changed to 'ip-172-31-17-147'. Nov 23 22:57:19.802194 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.5942 INFO http_proxy: Nov 23 22:57:19.899303 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.5942 INFO no_proxy: Nov 23 22:57:19.999109 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.5945 INFO Checking if agent identity type OnPrem can be assumed Nov 23 22:57:20.096176 tar[1987]: linux-arm64/README.md Nov 23 22:57:20.097037 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.5945 INFO Checking if agent identity type EC2 can be assumed Nov 23 22:57:20.137816 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 22:57:20.196359 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6888 INFO Agent will take identity from EC2 Nov 23 22:57:20.296016 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6914 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 23 22:57:20.309551 amazon-ssm-agent[2173]: 2025/11/23 22:57:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:20.310132 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:57:20.311197 amazon-ssm-agent[2173]: 2025/11/23 22:57:20 processing appconfig overrides Nov 23 22:57:20.348156 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6919 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 23 22:57:20.349198 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6919 INFO [amazon-ssm-agent] Starting Core Agent Nov 23 22:57:20.349198 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6919 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 23 22:57:20.349198 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6919 INFO [Registrar] Starting registrar module Nov 23 22:57:20.349198 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6952 INFO [EC2Identity] Checking disk for registration info Nov 23 22:57:20.349198 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6953 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 23 22:57:20.349530 amazon-ssm-agent[2173]: 2025-11-23 22:57:19.6953 INFO [EC2Identity] Generating registration keypair Nov 23 22:57:20.349530 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.2658 INFO [EC2Identity] Checking write access before registering Nov 23 22:57:20.349530 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.2666 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 23 22:57:20.349770 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3092 INFO [EC2Identity] EC2 registration was successful. Nov 23 22:57:20.349770 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3093 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 23 22:57:20.349770 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3094 INFO [CredentialRefresher] credentialRefresher has started Nov 23 22:57:20.349982 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3094 INFO [CredentialRefresher] Starting credentials refresher loop Nov 23 22:57:20.349982 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3476 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 23 22:57:20.349982 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3480 INFO [CredentialRefresher] Credentials ready Nov 23 22:57:20.394748 amazon-ssm-agent[2173]: 2025-11-23 22:57:20.3501 INFO [CredentialRefresher] Next credential rotation will be in 29.9999578418 minutes Nov 23 22:57:21.190152 sshd_keygen[2009]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 22:57:21.194387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:21.217319 (kubelet)[2225]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:21.240260 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 22:57:21.247926 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 22:57:21.257507 systemd[1]: Started sshd@0-172.31.17.147:22-139.178.68.195:48540.service - OpenSSH per-connection server daemon (139.178.68.195:48540). Nov 23 22:57:21.293752 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 22:57:21.298248 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 22:57:21.305683 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 22:57:21.367034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 22:57:21.376597 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 22:57:21.385198 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 23 22:57:21.391047 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 22:57:21.397180 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 22:57:21.403139 systemd[1]: Startup finished in 3.760s (kernel) + 8.872s (initrd) + 9.895s (userspace) = 22.528s. 
Nov 23 22:57:21.449556 amazon-ssm-agent[2173]: 2025-11-23 22:57:21.4362 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 23 22:57:21.543147 sshd[2234]: Accepted publickey for core from 139.178.68.195 port 48540 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:21.548843 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:21.553788 amazon-ssm-agent[2173]: 2025-11-23 22:57:21.4574 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2249) started Nov 23 22:57:21.564939 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 22:57:21.569221 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 22:57:21.608498 systemd-logind[1974]: New session 1 of user core. Nov 23 22:57:21.629739 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 22:57:21.639774 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 22:57:21.653321 amazon-ssm-agent[2173]: 2025-11-23 22:57:21.4574 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 23 22:57:21.668847 (systemd)[2261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 22:57:21.678683 systemd-logind[1974]: New session c1 of user core. Nov 23 22:57:22.051765 systemd[2261]: Queued start job for default target default.target. Nov 23 22:57:22.059323 systemd[2261]: Created slice app.slice - User Application Slice. Nov 23 22:57:22.059600 systemd[2261]: Reached target paths.target - Paths. Nov 23 22:57:22.059698 systemd[2261]: Reached target timers.target - Timers. Nov 23 22:57:22.062731 systemd[2261]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 22:57:22.162810 systemd[2261]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 22:57:22.163052 systemd[2261]: Reached target sockets.target - Sockets. Nov 23 22:57:22.163199 systemd[2261]: Reached target basic.target - Basic System. Nov 23 22:57:22.163282 systemd[2261]: Reached target default.target - Main User Target. Nov 23 22:57:22.163340 systemd[2261]: Startup finished in 462ms. Nov 23 22:57:22.163558 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 22:57:22.175373 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 22:57:22.332717 systemd[1]: Started sshd@1-172.31.17.147:22-139.178.68.195:52504.service - OpenSSH per-connection server daemon (139.178.68.195:52504). Nov 23 22:57:22.441622 kubelet[2225]: E1123 22:57:22.441532 2225 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:22.446699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:22.447241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:22.449217 systemd[1]: kubelet.service: Consumed 1.413s CPU time, 257.6M memory peak. 
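The kubelet failure above is the expected pre-join state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, and until it exists the kubelet exits and systemd keeps rescheduling it (the restart counters later in this log). The watcher below is an illustrative debugging aid only, assuming the standard kubeadm-managed path; it is not part of the boot sequence.

import pathlib
import time

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

def wait_for_kubelet_config(poll_seconds=5):
    # Poll until kubeadm (or whatever provisions the node) writes the config file
    # that the kubelet error above reports as missing.
    while not KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} not present yet; kubelet will keep crash-looping")
        time.sleep(poll_seconds)
    print(f"{KUBELET_CONFIG} found, {KUBELET_CONFIG.stat().st_size} bytes")

if __name__ == "__main__":
    wait_for_kubelet_config()

Later entries show exactly that pattern: scheduled restarts at counters 1 and 2 with the same error, then a kubelet start that gets past config loading and begins client bootstrap (the entries at the end of this section).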
Nov 23 22:57:22.540833 sshd[2281]: Accepted publickey for core from 139.178.68.195 port 52504 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:22.544046 sshd-session[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:22.552115 systemd-logind[1974]: New session 2 of user core. Nov 23 22:57:22.564355 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 22:57:22.689197 sshd[2285]: Connection closed by 139.178.68.195 port 52504 Nov 23 22:57:22.690363 sshd-session[2281]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:22.696990 systemd[1]: sshd@1-172.31.17.147:22-139.178.68.195:52504.service: Deactivated successfully. Nov 23 22:57:22.700023 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 22:57:22.705176 systemd-logind[1974]: Session 2 logged out. Waiting for processes to exit. Nov 23 22:57:22.708203 systemd-logind[1974]: Removed session 2. Nov 23 22:57:22.727507 systemd[1]: Started sshd@2-172.31.17.147:22-139.178.68.195:52510.service - OpenSSH per-connection server daemon (139.178.68.195:52510). Nov 23 22:57:22.921666 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 52510 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:22.924278 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:22.935856 systemd-logind[1974]: New session 3 of user core. Nov 23 22:57:22.948434 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 22:57:23.067949 sshd[2294]: Connection closed by 139.178.68.195 port 52510 Nov 23 22:57:23.067802 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:23.075296 systemd[1]: sshd@2-172.31.17.147:22-139.178.68.195:52510.service: Deactivated successfully. Nov 23 22:57:23.079671 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 22:57:23.081839 systemd-logind[1974]: Session 3 logged out. Waiting for processes to exit. Nov 23 22:57:23.084908 systemd-logind[1974]: Removed session 3. Nov 23 22:57:23.102826 systemd[1]: Started sshd@3-172.31.17.147:22-139.178.68.195:52520.service - OpenSSH per-connection server daemon (139.178.68.195:52520). Nov 23 22:57:23.302165 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 52520 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:23.305076 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:23.313190 systemd-logind[1974]: New session 4 of user core. Nov 23 22:57:23.322351 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 22:57:23.449208 sshd[2303]: Connection closed by 139.178.68.195 port 52520 Nov 23 22:57:23.449679 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:23.456693 systemd[1]: sshd@3-172.31.17.147:22-139.178.68.195:52520.service: Deactivated successfully. Nov 23 22:57:23.459943 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 22:57:23.462538 systemd-logind[1974]: Session 4 logged out. Waiting for processes to exit. Nov 23 22:57:23.465312 systemd-logind[1974]: Removed session 4. Nov 23 22:57:23.488738 systemd[1]: Started sshd@4-172.31.17.147:22-139.178.68.195:52524.service - OpenSSH per-connection server daemon (139.178.68.195:52524). 
Nov 23 22:57:23.688439 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 52524 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:23.690965 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:23.698940 systemd-logind[1974]: New session 5 of user core. Nov 23 22:57:23.710347 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 22:57:23.830016 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 22:57:23.830652 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:23.849467 sudo[2313]: pam_unix(sudo:session): session closed for user root Nov 23 22:57:23.873174 sshd[2312]: Connection closed by 139.178.68.195 port 52524 Nov 23 22:57:23.874247 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:23.881532 systemd-logind[1974]: Session 5 logged out. Waiting for processes to exit. Nov 23 22:57:23.882475 systemd[1]: sshd@4-172.31.17.147:22-139.178.68.195:52524.service: Deactivated successfully. Nov 23 22:57:23.885773 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 22:57:23.891439 systemd-logind[1974]: Removed session 5. Nov 23 22:57:23.911483 systemd[1]: Started sshd@5-172.31.17.147:22-139.178.68.195:52530.service - OpenSSH per-connection server daemon (139.178.68.195:52530). Nov 23 22:57:24.116261 sshd[2319]: Accepted publickey for core from 139.178.68.195 port 52530 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:24.119900 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:24.129697 systemd-logind[1974]: New session 6 of user core. Nov 23 22:57:24.139441 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 22:57:24.245612 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 22:57:24.246382 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:24.265630 sudo[2324]: pam_unix(sudo:session): session closed for user root Nov 23 22:57:24.276715 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 22:57:24.277439 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:24.294462 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:57:24.370929 augenrules[2346]: No rules Nov 23 22:57:24.373574 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:57:24.375201 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:57:24.378302 sudo[2323]: pam_unix(sudo:session): session closed for user root Nov 23 22:57:24.401502 sshd[2322]: Connection closed by 139.178.68.195 port 52530 Nov 23 22:57:24.402319 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Nov 23 22:57:24.409850 systemd[1]: sshd@5-172.31.17.147:22-139.178.68.195:52530.service: Deactivated successfully. Nov 23 22:57:24.413988 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 22:57:24.417252 systemd-logind[1974]: Session 6 logged out. Waiting for processes to exit. Nov 23 22:57:24.420014 systemd-logind[1974]: Removed session 6. Nov 23 22:57:24.440539 systemd[1]: Started sshd@6-172.31.17.147:22-139.178.68.195:52538.service - OpenSSH per-connection server daemon (139.178.68.195:52538). 
Nov 23 22:57:24.638299 sshd[2355]: Accepted publickey for core from 139.178.68.195 port 52538 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:57:24.640579 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:57:24.650801 systemd-logind[1974]: New session 7 of user core. Nov 23 22:57:24.657425 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 22:57:24.762859 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 22:57:24.763548 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:57:25.315060 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 22:57:25.331080 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 22:57:25.707661 dockerd[2377]: time="2025-11-23T22:57:25.707282875Z" level=info msg="Starting up" Nov 23 22:57:25.713896 dockerd[2377]: time="2025-11-23T22:57:25.713805343Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 22:57:25.737653 dockerd[2377]: time="2025-11-23T22:57:25.737568103Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 22:57:25.789299 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3254746843-merged.mount: Deactivated successfully. Nov 23 22:57:25.817904 systemd[1]: var-lib-docker-metacopy\x2dcheck28794637-merged.mount: Deactivated successfully. Nov 23 22:57:25.836692 dockerd[2377]: time="2025-11-23T22:57:25.836409235Z" level=info msg="Loading containers: start." Nov 23 22:57:25.855444 kernel: Initializing XFRM netlink socket Nov 23 22:57:26.215557 (udev-worker)[2399]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:57:26.133840 systemd-resolved[1889]: Clock change detected. Flushing caches. Nov 23 22:57:26.145602 systemd-journald[1524]: Time jumped backwards, rotating. Nov 23 22:57:26.174397 systemd-networkd[1888]: docker0: Link UP Nov 23 22:57:26.188983 dockerd[2377]: time="2025-11-23T22:57:26.188801564Z" level=info msg="Loading containers: done." Nov 23 22:57:26.215475 dockerd[2377]: time="2025-11-23T22:57:26.215400776Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 22:57:26.215609 dockerd[2377]: time="2025-11-23T22:57:26.215518304Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 22:57:26.215700 dockerd[2377]: time="2025-11-23T22:57:26.215666312Z" level=info msg="Initializing buildkit" Nov 23 22:57:26.262761 dockerd[2377]: time="2025-11-23T22:57:26.262696448Z" level=info msg="Completed buildkit initialization" Nov 23 22:57:26.280037 dockerd[2377]: time="2025-11-23T22:57:26.279956648Z" level=info msg="Daemon has completed initialization" Nov 23 22:57:26.280434 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 22:57:26.281757 dockerd[2377]: time="2025-11-23T22:57:26.281541608Z" level=info msg="API listen on /run/docker.sock" Nov 23 22:57:26.646503 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1600696779-merged.mount: Deactivated successfully. 
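Once dockerd logs "API listen on /run/docker.sock" as above, the Engine API is reachable over that Unix socket. The sketch below queries it with only the Python standard library; the socket path is taken from the log line, while the endpoint choice and printed fields are ordinary defaults assumed for illustration rather than anything configured on this host.

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    # HTTPConnection variant that dials a Unix socket instead of TCP.
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

def docker_version(sock_path="/run/docker.sock"):
    conn = UnixHTTPConnection(sock_path)
    conn.request("GET", "/version")
    resp = conn.getresponse()
    return json.loads(resp.read())

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))

Run with permission to read the socket, the Version field should match the 28.0.4 reported in the dockerd startup lines above.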
Nov 23 22:57:27.423621 containerd[2005]: time="2025-11-23T22:57:27.423525346Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 23 22:57:28.090070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3928364885.mount: Deactivated successfully. Nov 23 22:57:29.564758 containerd[2005]: time="2025-11-23T22:57:29.564676669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:29.567486 containerd[2005]: time="2025-11-23T22:57:29.567417001Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431959" Nov 23 22:57:29.570029 containerd[2005]: time="2025-11-23T22:57:29.569956921Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:29.575563 containerd[2005]: time="2025-11-23T22:57:29.575488285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:29.577734 containerd[2005]: time="2025-11-23T22:57:29.577449805Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.153837687s" Nov 23 22:57:29.577734 containerd[2005]: time="2025-11-23T22:57:29.577515001Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Nov 23 22:57:29.578590 containerd[2005]: time="2025-11-23T22:57:29.578551105Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 23 22:57:31.012010 containerd[2005]: time="2025-11-23T22:57:31.010224504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:31.012010 containerd[2005]: time="2025-11-23T22:57:31.011957700Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618955" Nov 23 22:57:31.012759 containerd[2005]: time="2025-11-23T22:57:31.012719064Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:31.017338 containerd[2005]: time="2025-11-23T22:57:31.017287740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:31.019418 containerd[2005]: time="2025-11-23T22:57:31.019372044Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 
1.440061687s" Nov 23 22:57:31.019552 containerd[2005]: time="2025-11-23T22:57:31.019524804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Nov 23 22:57:31.020974 containerd[2005]: time="2025-11-23T22:57:31.020915412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 23 22:57:32.306279 containerd[2005]: time="2025-11-23T22:57:32.306192170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:32.309131 containerd[2005]: time="2025-11-23T22:57:32.309050642Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618436" Nov 23 22:57:32.311374 containerd[2005]: time="2025-11-23T22:57:32.311304386Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:32.316512 containerd[2005]: time="2025-11-23T22:57:32.316406918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:32.318315 containerd[2005]: time="2025-11-23T22:57:32.317732066Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.296567798s" Nov 23 22:57:32.318315 containerd[2005]: time="2025-11-23T22:57:32.317787302Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Nov 23 22:57:32.318726 containerd[2005]: time="2025-11-23T22:57:32.318673814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 23 22:57:32.565410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 22:57:32.572704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:32.958161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:32.976805 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:33.163109 kubelet[2663]: E1123 22:57:33.163025 2663 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:33.172797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:33.173136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:33.175432 systemd[1]: kubelet.service: Consumed 328ms CPU time, 107.6M memory peak. Nov 23 22:57:33.754125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4243214939.mount: Deactivated successfully. 
Nov 23 22:57:34.342282 containerd[2005]: time="2025-11-23T22:57:34.341469664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:34.345109 containerd[2005]: time="2025-11-23T22:57:34.345024928Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561799" Nov 23 22:57:34.347176 containerd[2005]: time="2025-11-23T22:57:34.347136184Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:34.351843 containerd[2005]: time="2025-11-23T22:57:34.351757552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:34.353139 containerd[2005]: time="2025-11-23T22:57:34.352885768Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 2.034152038s" Nov 23 22:57:34.353139 containerd[2005]: time="2025-11-23T22:57:34.352943092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Nov 23 22:57:34.354417 containerd[2005]: time="2025-11-23T22:57:34.354378928Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 23 22:57:34.951610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237214513.mount: Deactivated successfully. 
Nov 23 22:57:36.206644 containerd[2005]: time="2025-11-23T22:57:36.206561694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:36.208419 containerd[2005]: time="2025-11-23T22:57:36.208345398Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Nov 23 22:57:36.211076 containerd[2005]: time="2025-11-23T22:57:36.210999726Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:36.217312 containerd[2005]: time="2025-11-23T22:57:36.216568014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:36.218731 containerd[2005]: time="2025-11-23T22:57:36.218492970Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.863926662s" Nov 23 22:57:36.218731 containerd[2005]: time="2025-11-23T22:57:36.218551590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 23 22:57:36.219348 containerd[2005]: time="2025-11-23T22:57:36.219307518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 22:57:36.682128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643748327.mount: Deactivated successfully. 
Nov 23 22:57:36.695639 containerd[2005]: time="2025-11-23T22:57:36.694391084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:36.696287 containerd[2005]: time="2025-11-23T22:57:36.696233636Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 23 22:57:36.698754 containerd[2005]: time="2025-11-23T22:57:36.698705840Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:36.703308 containerd[2005]: time="2025-11-23T22:57:36.703154588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:57:36.704478 containerd[2005]: time="2025-11-23T22:57:36.704421776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.114646ms" Nov 23 22:57:36.704554 containerd[2005]: time="2025-11-23T22:57:36.704475500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 22:57:36.705211 containerd[2005]: time="2025-11-23T22:57:36.705134024Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 23 22:57:37.319912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814478183.mount: Deactivated successfully. 
Nov 23 22:57:39.758323 containerd[2005]: time="2025-11-23T22:57:39.757806431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:39.760027 containerd[2005]: time="2025-11-23T22:57:39.759960503Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Nov 23 22:57:39.763015 containerd[2005]: time="2025-11-23T22:57:39.762923699Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:39.777043 containerd[2005]: time="2025-11-23T22:57:39.776947403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:57:39.780086 containerd[2005]: time="2025-11-23T22:57:39.778315199Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.073120263s" Nov 23 22:57:39.780086 containerd[2005]: time="2025-11-23T22:57:39.778376627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 23 22:57:43.423714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 22:57:43.429570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:43.779524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:43.791721 (kubelet)[2816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:57:43.874695 kubelet[2816]: E1123 22:57:43.874636 2816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:57:43.880502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:57:43.880985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:57:43.882017 systemd[1]: kubelet.service: Consumed 294ms CPU time, 104.6M memory peak. Nov 23 22:57:47.168750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:47.169071 systemd[1]: kubelet.service: Consumed 294ms CPU time, 104.6M memory peak. Nov 23 22:57:47.178590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:47.217385 systemd[1]: Reload requested from client PID 2830 ('systemctl') (unit session-7.scope)... Nov 23 22:57:47.217418 systemd[1]: Reloading... Nov 23 22:57:47.459307 zram_generator::config[2878]: No configuration found. Nov 23 22:57:47.910678 systemd[1]: Reloading finished in 692 ms. Nov 23 22:57:48.002017 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 23 22:57:48.002184 systemd[1]: kubelet.service: Failed with result 'signal'. 
Nov 23 22:57:48.004341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:48.004415 systemd[1]: kubelet.service: Consumed 224ms CPU time, 95M memory peak. Nov 23 22:57:48.007561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:48.335987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:48.356819 (kubelet)[2938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:57:48.434542 kubelet[2938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:57:48.435028 kubelet[2938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:57:48.435112 kubelet[2938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:57:48.435408 kubelet[2938]: I1123 22:57:48.435352 2938 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:57:49.615234 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 23 22:57:50.785776 kubelet[2938]: I1123 22:57:50.785703 2938 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 22:57:50.785776 kubelet[2938]: I1123 22:57:50.785756 2938 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:57:50.786402 kubelet[2938]: I1123 22:57:50.786213 2938 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 22:57:50.847279 kubelet[2938]: E1123 22:57:50.846766 2938 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:50.851441 kubelet[2938]: I1123 22:57:50.851375 2938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:57:50.861176 kubelet[2938]: I1123 22:57:50.861144 2938 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:57:50.867387 kubelet[2938]: I1123 22:57:50.867243 2938 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 22:57:50.868369 kubelet[2938]: I1123 22:57:50.867897 2938 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:57:50.868369 kubelet[2938]: I1123 22:57:50.867940 2938 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:57:50.868638 kubelet[2938]: I1123 22:57:50.868617 2938 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:57:50.868730 kubelet[2938]: I1123 22:57:50.868714 2938 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 22:57:50.869144 kubelet[2938]: I1123 22:57:50.869123 2938 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:57:50.874885 kubelet[2938]: I1123 22:57:50.874847 2938 kubelet.go:446] "Attempting to sync node with API server" Nov 23 22:57:50.875059 kubelet[2938]: I1123 22:57:50.875037 2938 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:57:50.875180 kubelet[2938]: I1123 22:57:50.875163 2938 kubelet.go:352] "Adding apiserver pod source" Nov 23 22:57:50.875304 kubelet[2938]: I1123 22:57:50.875286 2938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:57:50.882158 kubelet[2938]: W1123 22:57:50.881062 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-147&limit=500&resourceVersion=0": dial tcp 172.31.17.147:6443: connect: connection refused Nov 23 22:57:50.882158 kubelet[2938]: E1123 22:57:50.881167 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-147&limit=500&resourceVersion=0\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:50.882158 kubelet[2938]: W1123 
22:57:50.881810 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.147:6443: connect: connection refused Nov 23 22:57:50.882158 kubelet[2938]: E1123 22:57:50.881878 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:50.882521 kubelet[2938]: I1123 22:57:50.882481 2938 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:57:50.885337 kubelet[2938]: I1123 22:57:50.885234 2938 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 22:57:50.885660 kubelet[2938]: W1123 22:57:50.885624 2938 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 22:57:50.892405 kubelet[2938]: I1123 22:57:50.892369 2938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:57:50.892609 kubelet[2938]: I1123 22:57:50.892592 2938 server.go:1287] "Started kubelet" Nov 23 22:57:50.902761 kubelet[2938]: I1123 22:57:50.902723 2938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:57:50.910531 kubelet[2938]: I1123 22:57:50.910450 2938 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:57:50.912178 kubelet[2938]: I1123 22:57:50.912128 2938 server.go:479] "Adding debug handlers to kubelet server" Nov 23 22:57:50.914081 kubelet[2938]: I1123 22:57:50.913976 2938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:57:50.916168 kubelet[2938]: I1123 22:57:50.915387 2938 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:57:50.916168 kubelet[2938]: E1123 22:57:50.915916 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-147\" not found" Nov 23 22:57:50.917074 kubelet[2938]: I1123 22:57:50.917041 2938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:57:50.920690 kubelet[2938]: I1123 22:57:50.917761 2938 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:57:50.922333 kubelet[2938]: E1123 22:57:50.917915 2938 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.147:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.147:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-147.187ac4e01c2031c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-147,UID:ip-172-31-17-147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-147,},FirstTimestamp:2025-11-23 22:57:50.892560838 +0000 UTC m=+2.529271633,LastTimestamp:2025-11-23 22:57:50.892560838 +0000 UTC m=+2.529271633,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-147,}" Nov 23 22:57:50.922654 kubelet[2938]: I1123 22:57:50.920022 2938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:57:50.922761 kubelet[2938]: I1123 22:57:50.920124 2938 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:57:50.922866 kubelet[2938]: W1123 22:57:50.921216 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.147:6443: connect: connection refused Nov 23 22:57:50.923019 kubelet[2938]: E1123 22:57:50.922991 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:50.923116 kubelet[2938]: I1123 22:57:50.922174 2938 factory.go:221] Registration of the systemd container factory successfully Nov 23 22:57:50.923283 kubelet[2938]: E1123 22:57:50.923221 2938 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:57:50.923477 kubelet[2938]: I1123 22:57:50.923448 2938 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:57:50.923784 kubelet[2938]: E1123 22:57:50.921585 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-147?timeout=10s\": dial tcp 172.31.17.147:6443: connect: connection refused" interval="200ms" Nov 23 22:57:50.926127 kubelet[2938]: I1123 22:57:50.926014 2938 factory.go:221] Registration of the containerd container factory successfully Nov 23 22:57:50.954773 kubelet[2938]: I1123 22:57:50.954694 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 22:57:50.960684 kubelet[2938]: I1123 22:57:50.959411 2938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 22:57:50.960684 kubelet[2938]: I1123 22:57:50.959450 2938 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 22:57:50.960684 kubelet[2938]: I1123 22:57:50.959483 2938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 22:57:50.960684 kubelet[2938]: I1123 22:57:50.959496 2938 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 22:57:50.960684 kubelet[2938]: E1123 22:57:50.959568 2938 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:57:50.960684 kubelet[2938]: W1123 22:57:50.960341 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.147:6443: connect: connection refused Nov 23 22:57:50.960684 kubelet[2938]: E1123 22:57:50.960420 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:50.963037 kubelet[2938]: I1123 22:57:50.962982 2938 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:57:50.963217 kubelet[2938]: I1123 22:57:50.963197 2938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:57:50.963370 kubelet[2938]: I1123 22:57:50.963353 2938 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:57:50.969793 kubelet[2938]: I1123 22:57:50.969759 2938 policy_none.go:49] "None policy: Start" Nov 23 22:57:50.969966 kubelet[2938]: I1123 22:57:50.969947 2938 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 22:57:50.970071 kubelet[2938]: I1123 22:57:50.970054 2938 state_mem.go:35] "Initializing new in-memory state store" Nov 23 22:57:50.984043 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 22:57:51.008438 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 22:57:51.015721 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 22:57:51.016024 kubelet[2938]: E1123 22:57:51.015978 2938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-147\" not found" Nov 23 22:57:51.028006 kubelet[2938]: I1123 22:57:51.027965 2938 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 22:57:51.028669 kubelet[2938]: I1123 22:57:51.028283 2938 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:57:51.028669 kubelet[2938]: I1123 22:57:51.028317 2938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:57:51.029625 kubelet[2938]: I1123 22:57:51.029580 2938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:57:51.032317 kubelet[2938]: E1123 22:57:51.032042 2938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 22:57:51.032317 kubelet[2938]: E1123 22:57:51.032107 2938 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-147\" not found" Nov 23 22:57:51.080010 systemd[1]: Created slice kubepods-burstable-poda3d612a9bd1d4eae54a029d731c47f4b.slice - libcontainer container kubepods-burstable-poda3d612a9bd1d4eae54a029d731c47f4b.slice. 
Nov 23 22:57:51.103922 kubelet[2938]: E1123 22:57:51.103873 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:51.109587 systemd[1]: Created slice kubepods-burstable-pod2635ab1637fa7ef6f952d2e3c8f0e275.slice - libcontainer container kubepods-burstable-pod2635ab1637fa7ef6f952d2e3c8f0e275.slice. Nov 23 22:57:51.125281 kubelet[2938]: E1123 22:57:51.124639 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:51.125692 kubelet[2938]: E1123 22:57:51.125612 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-147?timeout=10s\": dial tcp 172.31.17.147:6443: connect: connection refused" interval="400ms" Nov 23 22:57:51.131056 systemd[1]: Created slice kubepods-burstable-pod86dec51b30415bb66a45a2157257207f.slice - libcontainer container kubepods-burstable-pod86dec51b30415bb66a45a2157257207f.slice. Nov 23 22:57:51.134066 kubelet[2938]: I1123 22:57:51.134021 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-147" Nov 23 22:57:51.135310 kubelet[2938]: E1123 22:57:51.135181 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.147:6443/api/v1/nodes\": dial tcp 172.31.17.147:6443: connect: connection refused" node="ip-172-31-17-147" Nov 23 22:57:51.137093 kubelet[2938]: E1123 22:57:51.136757 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:51.224134 kubelet[2938]: I1123 22:57:51.224098 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:51.224380 kubelet[2938]: I1123 22:57:51.224353 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86dec51b30415bb66a45a2157257207f-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-147\" (UID: \"86dec51b30415bb66a45a2157257207f\") " pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:57:51.224601 kubelet[2938]: I1123 22:57:51.224546 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3d612a9bd1d4eae54a029d731c47f4b-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-147\" (UID: \"a3d612a9bd1d4eae54a029d731c47f4b\") " pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:57:51.224734 kubelet[2938]: I1123 22:57:51.224711 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:51.224923 kubelet[2938]: I1123 22:57:51.224870 2938 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:51.225072 kubelet[2938]: I1123 22:57:51.225040 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:51.225246 kubelet[2938]: I1123 22:57:51.225224 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3d612a9bd1d4eae54a029d731c47f4b-ca-certs\") pod \"kube-apiserver-ip-172-31-17-147\" (UID: \"a3d612a9bd1d4eae54a029d731c47f4b\") " pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:57:51.225467 kubelet[2938]: I1123 22:57:51.225413 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3d612a9bd1d4eae54a029d731c47f4b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-147\" (UID: \"a3d612a9bd1d4eae54a029d731c47f4b\") " pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:57:51.225585 kubelet[2938]: I1123 22:57:51.225563 2938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:51.337781 kubelet[2938]: I1123 22:57:51.337648 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-147" Nov 23 22:57:51.338163 kubelet[2938]: E1123 22:57:51.338107 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.147:6443/api/v1/nodes\": dial tcp 172.31.17.147:6443: connect: connection refused" node="ip-172-31-17-147" Nov 23 22:57:51.406311 containerd[2005]: time="2025-11-23T22:57:51.405725757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-147,Uid:a3d612a9bd1d4eae54a029d731c47f4b,Namespace:kube-system,Attempt:0,}" Nov 23 22:57:51.428309 containerd[2005]: time="2025-11-23T22:57:51.427885521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-147,Uid:2635ab1637fa7ef6f952d2e3c8f0e275,Namespace:kube-system,Attempt:0,}" Nov 23 22:57:51.438662 containerd[2005]: time="2025-11-23T22:57:51.438616545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-147,Uid:86dec51b30415bb66a45a2157257207f,Namespace:kube-system,Attempt:0,}" Nov 23 22:57:51.461126 containerd[2005]: time="2025-11-23T22:57:51.461042493Z" level=info msg="connecting to shim c68a1e27f1f4d733e4ac732f66e0389e8318313ece778b222167669af899f49c" address="unix:///run/containerd/s/f6a276bf4bc1eb29495981aa0946bb5fe411d5c23b8abd8795b083f7381ae3c1" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:57:51.524218 containerd[2005]: 
time="2025-11-23T22:57:51.524137642Z" level=info msg="connecting to shim 00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4" address="unix:///run/containerd/s/63baa3976780733990aedb1481ab4de9042cf93aa2327f6d133030eff7b437f4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:57:51.527368 kubelet[2938]: E1123 22:57:51.527187 2938 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-147?timeout=10s\": dial tcp 172.31.17.147:6443: connect: connection refused" interval="800ms" Nov 23 22:57:51.534173 containerd[2005]: time="2025-11-23T22:57:51.534095986Z" level=info msg="connecting to shim 5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00" address="unix:///run/containerd/s/aa0d46234440cdaf8fdb558b773555e13d1fd0ccadcee72ecde4201500538e6a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:57:51.567543 systemd[1]: Started cri-containerd-c68a1e27f1f4d733e4ac732f66e0389e8318313ece778b222167669af899f49c.scope - libcontainer container c68a1e27f1f4d733e4ac732f66e0389e8318313ece778b222167669af899f49c. Nov 23 22:57:51.608848 systemd[1]: Started cri-containerd-00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4.scope - libcontainer container 00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4. Nov 23 22:57:51.631101 systemd[1]: Started cri-containerd-5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00.scope - libcontainer container 5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00. Nov 23 22:57:51.734703 containerd[2005]: time="2025-11-23T22:57:51.734588327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-147,Uid:a3d612a9bd1d4eae54a029d731c47f4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c68a1e27f1f4d733e4ac732f66e0389e8318313ece778b222167669af899f49c\"" Nov 23 22:57:51.742379 kubelet[2938]: I1123 22:57:51.742277 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-147" Nov 23 22:57:51.743622 kubelet[2938]: E1123 22:57:51.743339 2938 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.17.147:6443/api/v1/nodes\": dial tcp 172.31.17.147:6443: connect: connection refused" node="ip-172-31-17-147" Nov 23 22:57:51.747838 containerd[2005]: time="2025-11-23T22:57:51.747792563Z" level=info msg="CreateContainer within sandbox \"c68a1e27f1f4d733e4ac732f66e0389e8318313ece778b222167669af899f49c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 22:57:51.773679 containerd[2005]: time="2025-11-23T22:57:51.773563535Z" level=info msg="Container 47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:57:51.779542 containerd[2005]: time="2025-11-23T22:57:51.779477495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-147,Uid:86dec51b30415bb66a45a2157257207f,Namespace:kube-system,Attempt:0,} returns sandbox id \"00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4\"" Nov 23 22:57:51.787214 containerd[2005]: time="2025-11-23T22:57:51.787134971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-147,Uid:2635ab1637fa7ef6f952d2e3c8f0e275,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00\"" Nov 23 22:57:51.789440 containerd[2005]: 
time="2025-11-23T22:57:51.789373847Z" level=info msg="CreateContainer within sandbox \"00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 22:57:51.792836 containerd[2005]: time="2025-11-23T22:57:51.792787979Z" level=info msg="CreateContainer within sandbox \"5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 22:57:51.802730 containerd[2005]: time="2025-11-23T22:57:51.802658483Z" level=info msg="CreateContainer within sandbox \"c68a1e27f1f4d733e4ac732f66e0389e8318313ece778b222167669af899f49c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93\"" Nov 23 22:57:51.805290 containerd[2005]: time="2025-11-23T22:57:51.804453983Z" level=info msg="StartContainer for \"47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93\"" Nov 23 22:57:51.807710 containerd[2005]: time="2025-11-23T22:57:51.807649259Z" level=info msg="connecting to shim 47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93" address="unix:///run/containerd/s/f6a276bf4bc1eb29495981aa0946bb5fe411d5c23b8abd8795b083f7381ae3c1" protocol=ttrpc version=3 Nov 23 22:57:51.817317 containerd[2005]: time="2025-11-23T22:57:51.817244411Z" level=info msg="Container e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:57:51.824198 containerd[2005]: time="2025-11-23T22:57:51.824146367Z" level=info msg="Container 5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:57:51.834756 containerd[2005]: time="2025-11-23T22:57:51.834702011Z" level=info msg="CreateContainer within sandbox \"00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36\"" Nov 23 22:57:51.839775 containerd[2005]: time="2025-11-23T22:57:51.839614175Z" level=info msg="StartContainer for \"e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36\"" Nov 23 22:57:51.844599 containerd[2005]: time="2025-11-23T22:57:51.844544519Z" level=info msg="connecting to shim e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36" address="unix:///run/containerd/s/63baa3976780733990aedb1481ab4de9042cf93aa2327f6d133030eff7b437f4" protocol=ttrpc version=3 Nov 23 22:57:51.848285 containerd[2005]: time="2025-11-23T22:57:51.846525263Z" level=info msg="CreateContainer within sandbox \"5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5\"" Nov 23 22:57:51.848285 containerd[2005]: time="2025-11-23T22:57:51.847310219Z" level=info msg="StartContainer for \"5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5\"" Nov 23 22:57:51.850792 containerd[2005]: time="2025-11-23T22:57:51.850190327Z" level=info msg="connecting to shim 5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5" address="unix:///run/containerd/s/aa0d46234440cdaf8fdb558b773555e13d1fd0ccadcee72ecde4201500538e6a" protocol=ttrpc version=3 Nov 23 22:57:51.857982 systemd[1]: Started cri-containerd-47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93.scope - 
libcontainer container 47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93. Nov 23 22:57:51.874177 kubelet[2938]: W1123 22:57:51.874010 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.147:6443: connect: connection refused Nov 23 22:57:51.874177 kubelet[2938]: E1123 22:57:51.874112 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:51.934605 systemd[1]: Started cri-containerd-e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36.scope - libcontainer container e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36. Nov 23 22:57:51.953624 systemd[1]: Started cri-containerd-5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5.scope - libcontainer container 5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5. Nov 23 22:57:51.987865 kubelet[2938]: W1123 22:57:51.987776 2938 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.147:6443: connect: connection refused Nov 23 22:57:51.988013 kubelet[2938]: E1123 22:57:51.987879 2938 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.147:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:57:52.027780 containerd[2005]: time="2025-11-23T22:57:52.027709280Z" level=info msg="StartContainer for \"47b296ffc5eeb5fc3d506f265cc374f5e5b2abcca89d39e96422a0e8f4a02b93\" returns successfully" Nov 23 22:57:52.134402 containerd[2005]: time="2025-11-23T22:57:52.133404909Z" level=info msg="StartContainer for \"e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36\" returns successfully" Nov 23 22:57:52.138841 containerd[2005]: time="2025-11-23T22:57:52.138362829Z" level=info msg="StartContainer for \"5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5\" returns successfully" Nov 23 22:57:52.546593 kubelet[2938]: I1123 22:57:52.546456 2938 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-147" Nov 23 22:57:53.013033 kubelet[2938]: E1123 22:57:53.012822 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:53.023325 kubelet[2938]: E1123 22:57:53.023241 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:53.029273 kubelet[2938]: E1123 22:57:53.027733 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:54.029678 kubelet[2938]: E1123 22:57:54.029620 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:54.030223 kubelet[2938]: E1123 22:57:54.030166 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:54.031804 kubelet[2938]: E1123 22:57:54.031756 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:55.035400 kubelet[2938]: E1123 22:57:55.033938 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:55.035400 kubelet[2938]: E1123 22:57:55.033991 2938 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:55.883736 kubelet[2938]: I1123 22:57:55.883306 2938 apiserver.go:52] "Watching apiserver" Nov 23 22:57:55.923209 kubelet[2938]: I1123 22:57:55.923170 2938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:57:55.945839 kubelet[2938]: E1123 22:57:55.945795 2938 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-147\" not found" node="ip-172-31-17-147" Nov 23 22:57:56.079813 kubelet[2938]: I1123 22:57:56.079748 2938 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-147" Nov 23 22:57:56.122372 kubelet[2938]: I1123 22:57:56.121608 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:57:56.171772 kubelet[2938]: E1123 22:57:56.171334 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:57:56.171772 kubelet[2938]: I1123 22:57:56.171380 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:57:56.176306 kubelet[2938]: E1123 22:57:56.175540 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:57:56.176569 kubelet[2938]: I1123 22:57:56.176411 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:56.183864 kubelet[2938]: E1123 22:57:56.183809 2938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:57.933312 kubelet[2938]: I1123 22:57:57.933119 2938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:57:58.327289 systemd[1]: Reload requested from client PID 3218 ('systemctl') (unit session-7.scope)... Nov 23 22:57:58.327774 systemd[1]: Reloading... Nov 23 22:57:58.530307 zram_generator::config[3265]: No configuration found. Nov 23 22:57:59.028379 systemd[1]: Reloading finished in 699 ms. 
Nov 23 22:57:59.085609 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:59.102961 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:57:59.103459 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:59.103539 systemd[1]: kubelet.service: Consumed 3.256s CPU time, 128.4M memory peak. Nov 23 22:57:59.108700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:57:59.456180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:57:59.470879 (kubelet)[3322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:57:59.583301 kubelet[3322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:57:59.583301 kubelet[3322]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:57:59.583301 kubelet[3322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:57:59.583301 kubelet[3322]: I1123 22:57:59.582485 3322 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:57:59.596074 kubelet[3322]: I1123 22:57:59.596013 3322 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 22:57:59.596074 kubelet[3322]: I1123 22:57:59.596064 3322 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:57:59.596566 kubelet[3322]: I1123 22:57:59.596527 3322 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 22:57:59.599570 kubelet[3322]: I1123 22:57:59.599351 3322 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 23 22:57:59.606375 kubelet[3322]: I1123 22:57:59.606323 3322 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:57:59.623975 kubelet[3322]: I1123 22:57:59.623771 3322 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:57:59.630432 kubelet[3322]: I1123 22:57:59.630202 3322 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 22:57:59.631670 kubelet[3322]: I1123 22:57:59.630754 3322 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:57:59.631670 kubelet[3322]: I1123 22:57:59.630812 3322 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:57:59.631670 kubelet[3322]: I1123 22:57:59.631130 3322 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:57:59.631670 kubelet[3322]: I1123 22:57:59.631151 3322 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 22:57:59.632010 kubelet[3322]: I1123 22:57:59.631228 3322 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:57:59.632010 kubelet[3322]: I1123 22:57:59.631498 3322 kubelet.go:446] "Attempting to sync node with API server" Nov 23 22:57:59.632010 kubelet[3322]: I1123 22:57:59.631524 3322 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:57:59.634030 kubelet[3322]: I1123 22:57:59.632804 3322 kubelet.go:352] "Adding apiserver pod source" Nov 23 22:57:59.634030 kubelet[3322]: I1123 22:57:59.632868 3322 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:57:59.638076 kubelet[3322]: I1123 22:57:59.635301 3322 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:57:59.638076 kubelet[3322]: I1123 22:57:59.636062 3322 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 22:57:59.638076 kubelet[3322]: I1123 22:57:59.637860 3322 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:57:59.638076 kubelet[3322]: I1123 22:57:59.637917 3322 server.go:1287] "Started kubelet" Nov 23 22:57:59.644108 kubelet[3322]: I1123 22:57:59.644056 3322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:57:59.663715 kubelet[3322]: I1123 22:57:59.663601 3322 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Nov 23 22:57:59.668845 kubelet[3322]: I1123 22:57:59.668795 3322 server.go:479] "Adding debug handlers to kubelet server" Nov 23 22:57:59.673834 kubelet[3322]: I1123 22:57:59.671663 3322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:57:59.673834 kubelet[3322]: I1123 22:57:59.671983 3322 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:57:59.674986 kubelet[3322]: I1123 22:57:59.674941 3322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:57:59.680551 kubelet[3322]: I1123 22:57:59.680473 3322 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:57:59.680885 kubelet[3322]: E1123 22:57:59.680841 3322 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-17-147\" not found" Nov 23 22:57:59.702288 kubelet[3322]: I1123 22:57:59.701475 3322 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:57:59.703209 kubelet[3322]: I1123 22:57:59.702695 3322 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:57:59.708289 kubelet[3322]: I1123 22:57:59.706892 3322 factory.go:221] Registration of the systemd container factory successfully Nov 23 22:57:59.708289 kubelet[3322]: I1123 22:57:59.707057 3322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:57:59.719091 kubelet[3322]: I1123 22:57:59.719033 3322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 22:57:59.731909 kubelet[3322]: I1123 22:57:59.730440 3322 factory.go:221] Registration of the containerd container factory successfully Nov 23 22:57:59.736958 kubelet[3322]: E1123 22:57:59.735283 3322 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:57:59.744297 kubelet[3322]: I1123 22:57:59.743716 3322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 22:57:59.744442 kubelet[3322]: I1123 22:57:59.744394 3322 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 22:57:59.744442 kubelet[3322]: I1123 22:57:59.744436 3322 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 22:57:59.744715 kubelet[3322]: I1123 22:57:59.744680 3322 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 22:57:59.752825 kubelet[3322]: E1123 22:57:59.752727 3322 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:57:59.846730 kubelet[3322]: I1123 22:57:59.846677 3322 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:57:59.846730 kubelet[3322]: I1123 22:57:59.846710 3322 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:57:59.846882 kubelet[3322]: I1123 22:57:59.846744 3322 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:57:59.847532 kubelet[3322]: I1123 22:57:59.847023 3322 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 22:57:59.847532 kubelet[3322]: I1123 22:57:59.847056 3322 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 22:57:59.847532 kubelet[3322]: I1123 22:57:59.847092 3322 policy_none.go:49] "None policy: Start" Nov 23 22:57:59.847532 kubelet[3322]: I1123 22:57:59.847111 3322 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 22:57:59.847532 kubelet[3322]: I1123 22:57:59.847131 3322 state_mem.go:35] "Initializing new in-memory state store" Nov 23 22:57:59.847532 kubelet[3322]: I1123 22:57:59.847356 3322 state_mem.go:75] "Updated machine memory state" Nov 23 22:57:59.853780 kubelet[3322]: E1123 22:57:59.853596 3322 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 22:57:59.857642 kubelet[3322]: I1123 22:57:59.857595 3322 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 22:57:59.859127 kubelet[3322]: I1123 22:57:59.859067 3322 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:57:59.859228 kubelet[3322]: I1123 22:57:59.859104 3322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:57:59.862860 kubelet[3322]: I1123 22:57:59.862815 3322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:57:59.877191 kubelet[3322]: E1123 22:57:59.877136 3322 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 22:57:59.984314 kubelet[3322]: I1123 22:57:59.981551 3322 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-17-147" Nov 23 22:57:59.998290 kubelet[3322]: I1123 22:57:59.998200 3322 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-17-147" Nov 23 22:57:59.998409 kubelet[3322]: I1123 22:57:59.998386 3322 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-17-147" Nov 23 22:58:00.059572 kubelet[3322]: I1123 22:58:00.055654 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:58:00.059572 kubelet[3322]: I1123 22:58:00.056864 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:58:00.060977 kubelet[3322]: I1123 22:58:00.060367 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.078864 kubelet[3322]: E1123 22:58:00.078802 3322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-17-147\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.106280 kubelet[3322]: I1123 22:58:00.105348 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.106280 kubelet[3322]: I1123 22:58:00.105415 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.106280 kubelet[3322]: I1123 22:58:00.105456 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86dec51b30415bb66a45a2157257207f-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-147\" (UID: \"86dec51b30415bb66a45a2157257207f\") " pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:58:00.106280 kubelet[3322]: I1123 22:58:00.105494 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3d612a9bd1d4eae54a029d731c47f4b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-147\" (UID: \"a3d612a9bd1d4eae54a029d731c47f4b\") " pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:58:00.106280 kubelet[3322]: I1123 22:58:00.105537 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.106602 kubelet[3322]: I1123 22:58:00.105572 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.106602 kubelet[3322]: I1123 22:58:00.105630 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3d612a9bd1d4eae54a029d731c47f4b-ca-certs\") pod \"kube-apiserver-ip-172-31-17-147\" (UID: \"a3d612a9bd1d4eae54a029d731c47f4b\") " pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:58:00.106602 kubelet[3322]: I1123 22:58:00.105666 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3d612a9bd1d4eae54a029d731c47f4b-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-147\" (UID: \"a3d612a9bd1d4eae54a029d731c47f4b\") " pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:58:00.106764 kubelet[3322]: I1123 22:58:00.105701 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2635ab1637fa7ef6f952d2e3c8f0e275-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-147\" (UID: \"2635ab1637fa7ef6f952d2e3c8f0e275\") " pod="kube-system/kube-controller-manager-ip-172-31-17-147" Nov 23 22:58:00.650284 kubelet[3322]: I1123 22:58:00.650206 3322 apiserver.go:52] "Watching apiserver" Nov 23 22:58:00.702791 kubelet[3322]: I1123 22:58:00.702735 3322 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:58:00.810150 kubelet[3322]: I1123 22:58:00.809284 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:58:00.810150 kubelet[3322]: I1123 22:58:00.809650 3322 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:58:00.823559 kubelet[3322]: E1123 22:58:00.823514 3322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-17-147\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-147" Nov 23 22:58:00.834555 kubelet[3322]: E1123 22:58:00.834489 3322 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-17-147\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-147" Nov 23 22:58:00.866542 kubelet[3322]: I1123 22:58:00.866409 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-147" podStartSLOduration=0.8663888 podStartE2EDuration="866.3888ms" podCreationTimestamp="2025-11-23 22:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:00.865975088 +0000 UTC m=+1.385051768" watchObservedRunningTime="2025-11-23 22:58:00.8663888 +0000 UTC m=+1.385465456" Nov 23 22:58:00.911886 kubelet[3322]: I1123 22:58:00.911490 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-147" podStartSLOduration=0.911465132 podStartE2EDuration="911.465132ms" podCreationTimestamp="2025-11-23 22:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:00.887618888 +0000 UTC 
m=+1.406695556" watchObservedRunningTime="2025-11-23 22:58:00.911465132 +0000 UTC m=+1.430541788" Nov 23 22:58:00.933589 kubelet[3322]: I1123 22:58:00.933490 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-147" podStartSLOduration=3.933468488 podStartE2EDuration="3.933468488s" podCreationTimestamp="2025-11-23 22:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:00.9116672 +0000 UTC m=+1.430743868" watchObservedRunningTime="2025-11-23 22:58:00.933468488 +0000 UTC m=+1.452545168" Nov 23 22:58:02.745293 update_engine[1975]: I20251123 22:58:02.745193 1975 update_attempter.cc:509] Updating boot flags... Nov 23 22:58:04.944376 kubelet[3322]: I1123 22:58:04.944329 3322 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 22:58:04.945446 containerd[2005]: time="2025-11-23T22:58:04.945135648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 22:58:04.947650 kubelet[3322]: I1123 22:58:04.945789 3322 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 22:58:05.605701 systemd[1]: Created slice kubepods-besteffort-pod8a8c5aa4_3d9d_4248_b15d_dff52ce3bf59.slice - libcontainer container kubepods-besteffort-pod8a8c5aa4_3d9d_4248_b15d_dff52ce3bf59.slice. Nov 23 22:58:05.641249 kubelet[3322]: I1123 22:58:05.641144 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59-lib-modules\") pod \"kube-proxy-dvgl7\" (UID: \"8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59\") " pod="kube-system/kube-proxy-dvgl7" Nov 23 22:58:05.641249 kubelet[3322]: I1123 22:58:05.641221 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59-xtables-lock\") pod \"kube-proxy-dvgl7\" (UID: \"8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59\") " pod="kube-system/kube-proxy-dvgl7" Nov 23 22:58:05.641629 kubelet[3322]: I1123 22:58:05.641353 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95zd2\" (UniqueName: \"kubernetes.io/projected/8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59-kube-api-access-95zd2\") pod \"kube-proxy-dvgl7\" (UID: \"8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59\") " pod="kube-system/kube-proxy-dvgl7" Nov 23 22:58:05.641749 kubelet[3322]: I1123 22:58:05.641432 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59-kube-proxy\") pod \"kube-proxy-dvgl7\" (UID: \"8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59\") " pod="kube-system/kube-proxy-dvgl7" Nov 23 22:58:05.920365 containerd[2005]: time="2025-11-23T22:58:05.919889893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvgl7,Uid:8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:05.960038 containerd[2005]: time="2025-11-23T22:58:05.959966869Z" level=info msg="connecting to shim 0a2eccdcb2c0be0817676aaa0eb2690714d726ef51690858c49388e1bd7cedda" address="unix:///run/containerd/s/8d6e7e45b65c8f5e86ebc0792933c85340e5562f1cf4f3121532bfa8a4107df2" 
namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:06.029589 systemd[1]: Started cri-containerd-0a2eccdcb2c0be0817676aaa0eb2690714d726ef51690858c49388e1bd7cedda.scope - libcontainer container 0a2eccdcb2c0be0817676aaa0eb2690714d726ef51690858c49388e1bd7cedda. Nov 23 22:58:06.109648 systemd[1]: Created slice kubepods-besteffort-podb7a68a53_d574_4846_9408_c5e58911d7a5.slice - libcontainer container kubepods-besteffort-podb7a68a53_d574_4846_9408_c5e58911d7a5.slice. Nov 23 22:58:06.145877 kubelet[3322]: I1123 22:58:06.145813 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f27pp\" (UniqueName: \"kubernetes.io/projected/b7a68a53-d574-4846-9408-c5e58911d7a5-kube-api-access-f27pp\") pod \"tigera-operator-7dcd859c48-vqptz\" (UID: \"b7a68a53-d574-4846-9408-c5e58911d7a5\") " pod="tigera-operator/tigera-operator-7dcd859c48-vqptz" Nov 23 22:58:06.146628 kubelet[3322]: I1123 22:58:06.146535 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7a68a53-d574-4846-9408-c5e58911d7a5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vqptz\" (UID: \"b7a68a53-d574-4846-9408-c5e58911d7a5\") " pod="tigera-operator/tigera-operator-7dcd859c48-vqptz" Nov 23 22:58:06.154771 containerd[2005]: time="2025-11-23T22:58:06.154640818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvgl7,Uid:8a8c5aa4-3d9d-4248-b15d-dff52ce3bf59,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a2eccdcb2c0be0817676aaa0eb2690714d726ef51690858c49388e1bd7cedda\"" Nov 23 22:58:06.160526 containerd[2005]: time="2025-11-23T22:58:06.160472890Z" level=info msg="CreateContainer within sandbox \"0a2eccdcb2c0be0817676aaa0eb2690714d726ef51690858c49388e1bd7cedda\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 22:58:06.186370 containerd[2005]: time="2025-11-23T22:58:06.184228402Z" level=info msg="Container 3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:06.205772 containerd[2005]: time="2025-11-23T22:58:06.205695155Z" level=info msg="CreateContainer within sandbox \"0a2eccdcb2c0be0817676aaa0eb2690714d726ef51690858c49388e1bd7cedda\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3\"" Nov 23 22:58:06.206906 containerd[2005]: time="2025-11-23T22:58:06.206815571Z" level=info msg="StartContainer for \"3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3\"" Nov 23 22:58:06.213355 containerd[2005]: time="2025-11-23T22:58:06.212044343Z" level=info msg="connecting to shim 3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3" address="unix:///run/containerd/s/8d6e7e45b65c8f5e86ebc0792933c85340e5562f1cf4f3121532bfa8a4107df2" protocol=ttrpc version=3 Nov 23 22:58:06.252723 systemd[1]: Started cri-containerd-3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3.scope - libcontainer container 3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3. 
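The containerd entries above trace the CRI flow for kube-proxy-dvgl7: RunPodSandbox connects a shim over a ttrpc v3 unix socket, and the later CreateContainer/StartContainer calls for the kube-proxy container reuse the same socket (.../8d6e7e45...), which is how the sandbox id 0a2ecc... and the container id 3f0523... can be tied together from the log alone. A minimal sketch of that pairing, assuming a plain-text export of this journal (the "journal.txt" path, regex, and names below are my own, not from the log):

    import re

    # Matches containerd entries of the form:
    #   msg="connecting to shim <64-hex id>" address="unix:///run/containerd/s/<hash>"
    SHIM_RE = re.compile(r'connecting to shim (?P<id>[0-9a-f]{64})" address="(?P<addr>[^"]+)"')

    def shims_by_socket(lines):
        """Group sandbox/container IDs by the shim socket they connect to."""
        groups = {}
        for line in lines:
            m = SHIM_RE.search(line)
            if m:
                groups.setdefault(m.group("addr"), []).append(m.group("id"))
        return groups

    if __name__ == "__main__":
        with open("journal.txt") as f:   # hypothetical export of this journal
            for addr, ids in shims_by_socket(f).items():
                # e.g. the 0a2ecc... sandbox and the 3f0523... kube-proxy
                # container show up under the same socket address.
                print(addr, ids)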
Nov 23 22:58:06.384119 containerd[2005]: time="2025-11-23T22:58:06.383971283Z" level=info msg="StartContainer for \"3f0523b2949aa9a8e634b2621eeb7e3b199aa48bcc260981e4b0573407c1f5c3\" returns successfully" Nov 23 22:58:06.424288 containerd[2005]: time="2025-11-23T22:58:06.423822276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vqptz,Uid:b7a68a53-d574-4846-9408-c5e58911d7a5,Namespace:tigera-operator,Attempt:0,}" Nov 23 22:58:06.472627 containerd[2005]: time="2025-11-23T22:58:06.472482396Z" level=info msg="connecting to shim c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e" address="unix:///run/containerd/s/57200b57480db28b6d5ba0e31343d1efb222f2c63e32f69963c593f0727904b3" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:06.523559 systemd[1]: Started cri-containerd-c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e.scope - libcontainer container c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e. Nov 23 22:58:06.634557 containerd[2005]: time="2025-11-23T22:58:06.634364485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vqptz,Uid:b7a68a53-d574-4846-9408-c5e58911d7a5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e\"" Nov 23 22:58:06.640722 containerd[2005]: time="2025-11-23T22:58:06.640503901Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 22:58:07.693134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634195613.mount: Deactivated successfully. Nov 23 22:58:08.571814 containerd[2005]: time="2025-11-23T22:58:08.571748822Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:08.573315 containerd[2005]: time="2025-11-23T22:58:08.573133622Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 22:58:08.574353 containerd[2005]: time="2025-11-23T22:58:08.574282418Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:08.578299 containerd[2005]: time="2025-11-23T22:58:08.577620518Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:08.579309 containerd[2005]: time="2025-11-23T22:58:08.579030122Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.938321537s" Nov 23 22:58:08.579309 containerd[2005]: time="2025-11-23T22:58:08.579085718Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 22:58:08.585109 containerd[2005]: time="2025-11-23T22:58:08.585062090Z" level=info msg="CreateContainer within sandbox \"c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 22:58:08.594296 containerd[2005]: time="2025-11-23T22:58:08.593710022Z" 
level=info msg="Container e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:08.615101 containerd[2005]: time="2025-11-23T22:58:08.615037971Z" level=info msg="CreateContainer within sandbox \"c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\"" Nov 23 22:58:08.617710 containerd[2005]: time="2025-11-23T22:58:08.617651499Z" level=info msg="StartContainer for \"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\"" Nov 23 22:58:08.622495 containerd[2005]: time="2025-11-23T22:58:08.622388811Z" level=info msg="connecting to shim e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85" address="unix:///run/containerd/s/57200b57480db28b6d5ba0e31343d1efb222f2c63e32f69963c593f0727904b3" protocol=ttrpc version=3 Nov 23 22:58:08.672558 systemd[1]: Started cri-containerd-e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85.scope - libcontainer container e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85. Nov 23 22:58:08.728209 containerd[2005]: time="2025-11-23T22:58:08.728058807Z" level=info msg="StartContainer for \"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\" returns successfully" Nov 23 22:58:08.882646 kubelet[3322]: I1123 22:58:08.881149 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvgl7" podStartSLOduration=3.8811247 podStartE2EDuration="3.8811247s" podCreationTimestamp="2025-11-23 22:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:06.878409374 +0000 UTC m=+7.397486054" watchObservedRunningTime="2025-11-23 22:58:08.8811247 +0000 UTC m=+9.400201356" Nov 23 22:58:15.600645 sudo[2359]: pam_unix(sudo:session): session closed for user root Nov 23 22:58:15.624515 sshd[2358]: Connection closed by 139.178.68.195 port 52538 Nov 23 22:58:15.625535 sshd-session[2355]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:15.639504 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 22:58:15.639952 systemd[1]: session-7.scope: Consumed 11.032s CPU time, 222.3M memory peak. Nov 23 22:58:15.644157 systemd[1]: sshd@6-172.31.17.147:22-139.178.68.195:52538.service: Deactivated successfully. Nov 23 22:58:15.658693 systemd-logind[1974]: Session 7 logged out. Waiting for processes to exit. Nov 23 22:58:15.663938 systemd-logind[1974]: Removed session 7. Nov 23 22:58:31.586580 kubelet[3322]: I1123 22:58:31.584708 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vqptz" podStartSLOduration=24.641237568 podStartE2EDuration="26.584687353s" podCreationTimestamp="2025-11-23 22:58:05 +0000 UTC" firstStartedPulling="2025-11-23 22:58:06.637891273 +0000 UTC m=+7.156967941" lastFinishedPulling="2025-11-23 22:58:08.581341058 +0000 UTC m=+9.100417726" observedRunningTime="2025-11-23 22:58:08.881723188 +0000 UTC m=+9.400799856" watchObservedRunningTime="2025-11-23 22:58:31.584687353 +0000 UTC m=+32.103764021" Nov 23 22:58:31.603284 systemd[1]: Created slice kubepods-besteffort-poda4758881_5b90_4144_9163_99485c12391a.slice - libcontainer container kubepods-besteffort-poda4758881_5b90_4144_9163_99485c12391a.slice. 
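In the tigera-operator pod_startup_latency_tracker entry above, the reported durations are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check with the logged timestamps (truncated to microseconds, since Python's datetime carries no nanoseconds) reproduces both figures:

    from datetime import datetime, timezone

    utc = timezone.utc
    created   = datetime(2025, 11, 23, 22, 58, 5, tzinfo=utc)            # podCreationTimestamp
    pull_from = datetime(2025, 11, 23, 22, 58, 6, 637891, tzinfo=utc)    # firstStartedPulling
    pull_to   = datetime(2025, 11, 23, 22, 58, 8, 581341, tzinfo=utc)    # lastFinishedPulling
    running   = datetime(2025, 11, 23, 22, 58, 31, 584687, tzinfo=utc)   # watchObservedRunningTime

    e2e = (running - created).total_seconds()            # ~26.585 s -> podStartE2EDuration="26.584687353s"
    slo = e2e - (pull_to - pull_from).total_seconds()    # ~24.641 s -> podStartSLOduration=24.641237568
    print(f"E2E={e2e:.6f}s  SLO={slo:.6f}s")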
Nov 23 22:58:31.622820 kubelet[3322]: I1123 22:58:31.622620 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4758881-5b90-4144-9163-99485c12391a-typha-certs\") pod \"calico-typha-7864c7f867-n8d2k\" (UID: \"a4758881-5b90-4144-9163-99485c12391a\") " pod="calico-system/calico-typha-7864c7f867-n8d2k" Nov 23 22:58:31.623366 kubelet[3322]: I1123 22:58:31.623202 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd4z5\" (UniqueName: \"kubernetes.io/projected/a4758881-5b90-4144-9163-99485c12391a-kube-api-access-qd4z5\") pod \"calico-typha-7864c7f867-n8d2k\" (UID: \"a4758881-5b90-4144-9163-99485c12391a\") " pod="calico-system/calico-typha-7864c7f867-n8d2k" Nov 23 22:58:31.623790 kubelet[3322]: I1123 22:58:31.623669 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4758881-5b90-4144-9163-99485c12391a-tigera-ca-bundle\") pod \"calico-typha-7864c7f867-n8d2k\" (UID: \"a4758881-5b90-4144-9163-99485c12391a\") " pod="calico-system/calico-typha-7864c7f867-n8d2k" Nov 23 22:58:31.855308 kubelet[3322]: I1123 22:58:31.854927 3322 status_manager.go:890] "Failed to get status for pod" podUID="66a65c3c-42c4-4308-8b3c-7b79162ed287" pod="calico-system/calico-node-zmgh5" err="pods \"calico-node-zmgh5\" is forbidden: User \"system:node:ip-172-31-17-147\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-17-147' and this object" Nov 23 22:58:31.868760 systemd[1]: Created slice kubepods-besteffort-pod66a65c3c_42c4_4308_8b3c_7b79162ed287.slice - libcontainer container kubepods-besteffort-pod66a65c3c_42c4_4308_8b3c_7b79162ed287.slice. 
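The long run of driver-call.go/plugins.go messages that follows is the kubelet's FlexVolume plugin probe, apparently re-triggered as the volume reconciler works through calico-node's volumes (including flexvol-driver-host): it scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to run nodeagent~uds/uds with the single argument init, finds no executable, and then fails to JSON-decode the empty output ("unexpected end of JSON input"), skipping that plugin directory each time, as the messages say. As a rough sketch only (the reply format below follows the usual FlexVolume convention and is an assumption, not something taken from this log), a driver that would satisfy the probe prints a JSON status object for init:

    #!/usr/bin/env python3
    # Hypothetical stand-in for <plugin-dir>/nodeagent~uds/uds, illustrating what
    # the probe expects: the kubelet runs the driver with argv[1] == "init" and
    # JSON-decodes stdout. On this node the binary is missing entirely, so stdout
    # is "" and the decode fails with "unexpected end of JSON input", as logged.
    import json
    import sys

    if __name__ == "__main__":
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # Conventional init reply; "attach": False (no attach/detach phase)
            # is an assumption for this sketch.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        else:
            print(json.dumps({"status": "Not supported", "message": f"unhandled call: {cmd}"}))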
Nov 23 22:58:31.911851 containerd[2005]: time="2025-11-23T22:58:31.911801522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7864c7f867-n8d2k,Uid:a4758881-5b90-4144-9163-99485c12391a,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:31.931235 kubelet[3322]: I1123 22:58:31.927451 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-var-lib-calico\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931235 kubelet[3322]: I1123 22:58:31.927516 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/66a65c3c-42c4-4308-8b3c-7b79162ed287-node-certs\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931235 kubelet[3322]: I1123 22:58:31.927559 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-flexvol-driver-host\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931235 kubelet[3322]: I1123 22:58:31.927595 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-var-run-calico\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931235 kubelet[3322]: I1123 22:58:31.927634 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-cni-net-dir\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931631 kubelet[3322]: I1123 22:58:31.927670 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-cni-bin-dir\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931631 kubelet[3322]: I1123 22:58:31.927703 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-policysync\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931631 kubelet[3322]: I1123 22:58:31.927741 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfjbt\" (UniqueName: \"kubernetes.io/projected/66a65c3c-42c4-4308-8b3c-7b79162ed287-kube-api-access-vfjbt\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931631 kubelet[3322]: I1123 22:58:31.927777 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/66a65c3c-42c4-4308-8b3c-7b79162ed287-tigera-ca-bundle\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931631 kubelet[3322]: I1123 22:58:31.927814 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-cni-log-dir\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931867 kubelet[3322]: I1123 22:58:31.927855 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-lib-modules\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.931867 kubelet[3322]: I1123 22:58:31.927896 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66a65c3c-42c4-4308-8b3c-7b79162ed287-xtables-lock\") pod \"calico-node-zmgh5\" (UID: \"66a65c3c-42c4-4308-8b3c-7b79162ed287\") " pod="calico-system/calico-node-zmgh5" Nov 23 22:58:31.983642 containerd[2005]: time="2025-11-23T22:58:31.983573259Z" level=info msg="connecting to shim ce7f38ccca2eab55073f7cbd27db00e4984bcecdf22d26a444f2cb5325570b0a" address="unix:///run/containerd/s/33a13a29d0a724bd2c581681cbfeab357605a587fe3af6dd5c617f95ff3ebda6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:32.041684 kubelet[3322]: E1123 22:58:32.039197 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.041684 kubelet[3322]: W1123 22:58:32.041101 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.041684 kubelet[3322]: E1123 22:58:32.041149 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.044106 kubelet[3322]: E1123 22:58:32.044070 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.044327 kubelet[3322]: W1123 22:58:32.044298 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.047836 kubelet[3322]: E1123 22:58:32.047798 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.073542 kubelet[3322]: E1123 22:58:32.073463 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.073542 kubelet[3322]: W1123 22:58:32.073532 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.073758 kubelet[3322]: E1123 22:58:32.073589 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.089040 kubelet[3322]: E1123 22:58:32.088954 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.089040 kubelet[3322]: W1123 22:58:32.089023 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.089247 kubelet[3322]: E1123 22:58:32.089082 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.106961 systemd[1]: Started cri-containerd-ce7f38ccca2eab55073f7cbd27db00e4984bcecdf22d26a444f2cb5325570b0a.scope - libcontainer container ce7f38ccca2eab55073f7cbd27db00e4984bcecdf22d26a444f2cb5325570b0a. Nov 23 22:58:32.139898 kubelet[3322]: E1123 22:58:32.139165 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:32.177327 containerd[2005]: time="2025-11-23T22:58:32.176711760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zmgh5,Uid:66a65c3c-42c4-4308-8b3c-7b79162ed287,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:32.203013 kubelet[3322]: E1123 22:58:32.202931 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.204554 kubelet[3322]: W1123 22:58:32.203209 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.204554 kubelet[3322]: E1123 22:58:32.203524 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.206334 kubelet[3322]: E1123 22:58:32.205174 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.206334 kubelet[3322]: W1123 22:58:32.205413 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.206334 kubelet[3322]: E1123 22:58:32.205493 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.207034 kubelet[3322]: E1123 22:58:32.207001 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.207380 kubelet[3322]: W1123 22:58:32.207315 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.207777 kubelet[3322]: E1123 22:58:32.207599 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.209811 kubelet[3322]: E1123 22:58:32.209604 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.209811 kubelet[3322]: W1123 22:58:32.209639 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.209811 kubelet[3322]: E1123 22:58:32.209692 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.211593 kubelet[3322]: E1123 22:58:32.211406 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.211593 kubelet[3322]: W1123 22:58:32.211446 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.211593 kubelet[3322]: E1123 22:58:32.211480 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.213581 kubelet[3322]: E1123 22:58:32.213544 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.213850 kubelet[3322]: W1123 22:58:32.213715 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.213850 kubelet[3322]: E1123 22:58:32.213757 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.214363 kubelet[3322]: E1123 22:58:32.214328 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.215286 kubelet[3322]: W1123 22:58:32.214469 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.215286 kubelet[3322]: E1123 22:58:32.214504 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.215826 kubelet[3322]: E1123 22:58:32.215792 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.217406 kubelet[3322]: W1123 22:58:32.217340 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.217624 kubelet[3322]: E1123 22:58:32.217598 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.218179 kubelet[3322]: E1123 22:58:32.218150 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.218456 kubelet[3322]: W1123 22:58:32.218306 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.218456 kubelet[3322]: E1123 22:58:32.218337 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.219607 kubelet[3322]: E1123 22:58:32.219447 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.219607 kubelet[3322]: W1123 22:58:32.219480 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.219607 kubelet[3322]: E1123 22:58:32.219513 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.220276 kubelet[3322]: E1123 22:58:32.220216 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.220530 kubelet[3322]: W1123 22:58:32.220248 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.220530 kubelet[3322]: E1123 22:58:32.220431 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.220987 kubelet[3322]: E1123 22:58:32.220960 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.221232 kubelet[3322]: W1123 22:58:32.221090 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.221232 kubelet[3322]: E1123 22:58:32.221122 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.221649 kubelet[3322]: E1123 22:58:32.221626 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.221859 kubelet[3322]: W1123 22:58:32.221752 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.221859 kubelet[3322]: E1123 22:58:32.221783 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.222228 kubelet[3322]: E1123 22:58:32.222206 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.222477 kubelet[3322]: W1123 22:58:32.222306 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.222477 kubelet[3322]: E1123 22:58:32.222334 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.222946 kubelet[3322]: E1123 22:58:32.222921 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.223166 kubelet[3322]: W1123 22:58:32.223049 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.223166 kubelet[3322]: E1123 22:58:32.223079 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.223598 kubelet[3322]: E1123 22:58:32.223573 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.223858 kubelet[3322]: W1123 22:58:32.223707 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.223858 kubelet[3322]: E1123 22:58:32.223742 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.224711 kubelet[3322]: E1123 22:58:32.224485 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.224711 kubelet[3322]: W1123 22:58:32.224515 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.224711 kubelet[3322]: E1123 22:58:32.224541 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.226617 kubelet[3322]: E1123 22:58:32.225236 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.226617 kubelet[3322]: W1123 22:58:32.226337 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.226617 kubelet[3322]: E1123 22:58:32.226383 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.227783 kubelet[3322]: E1123 22:58:32.227595 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.227783 kubelet[3322]: W1123 22:58:32.227630 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.227783 kubelet[3322]: E1123 22:58:32.227660 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.228377 kubelet[3322]: E1123 22:58:32.228348 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.228511 kubelet[3322]: W1123 22:58:32.228485 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.228655 kubelet[3322]: E1123 22:58:32.228630 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.229743 containerd[2005]: time="2025-11-23T22:58:32.229687668Z" level=info msg="connecting to shim 9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f" address="unix:///run/containerd/s/dab2e52cbb8cbc34e7823325313afb9b21d4eaecf6fe00318d45f3d64da699d8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:32.233442 kubelet[3322]: E1123 22:58:32.233381 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.234044 kubelet[3322]: W1123 22:58:32.234006 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.234401 kubelet[3322]: E1123 22:58:32.234360 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.237119 kubelet[3322]: I1123 22:58:32.236928 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pvh6\" (UniqueName: \"kubernetes.io/projected/b6239d0a-f247-4ff7-8f39-2d2983756ead-kube-api-access-2pvh6\") pod \"csi-node-driver-rz2c9\" (UID: \"b6239d0a-f247-4ff7-8f39-2d2983756ead\") " pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:32.238503 kubelet[3322]: E1123 22:58:32.238437 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.239393 kubelet[3322]: W1123 22:58:32.238605 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.239393 kubelet[3322]: E1123 22:58:32.238657 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.240887 kubelet[3322]: E1123 22:58:32.240493 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.240887 kubelet[3322]: W1123 22:58:32.240537 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.240887 kubelet[3322]: E1123 22:58:32.240596 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.240887 kubelet[3322]: I1123 22:58:32.240643 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b6239d0a-f247-4ff7-8f39-2d2983756ead-kubelet-dir\") pod \"csi-node-driver-rz2c9\" (UID: \"b6239d0a-f247-4ff7-8f39-2d2983756ead\") " pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:32.242448 kubelet[3322]: E1123 22:58:32.242242 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.242448 kubelet[3322]: W1123 22:58:32.242439 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.242817 kubelet[3322]: E1123 22:58:32.242706 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.244949 kubelet[3322]: E1123 22:58:32.244741 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.244949 kubelet[3322]: W1123 22:58:32.244942 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.245699 kubelet[3322]: E1123 22:58:32.245144 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.246223 kubelet[3322]: E1123 22:58:32.246176 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.246223 kubelet[3322]: W1123 22:58:32.246211 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.247055 kubelet[3322]: E1123 22:58:32.246805 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.247702 kubelet[3322]: E1123 22:58:32.247654 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.247702 kubelet[3322]: W1123 22:58:32.247690 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.248318 kubelet[3322]: E1123 22:58:32.248068 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.248318 kubelet[3322]: I1123 22:58:32.248146 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b6239d0a-f247-4ff7-8f39-2d2983756ead-registration-dir\") pod \"csi-node-driver-rz2c9\" (UID: \"b6239d0a-f247-4ff7-8f39-2d2983756ead\") " pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:32.251700 kubelet[3322]: E1123 22:58:32.251069 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.251700 kubelet[3322]: W1123 22:58:32.251411 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.251700 kubelet[3322]: E1123 22:58:32.251461 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.255818 kubelet[3322]: E1123 22:58:32.255158 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.255818 kubelet[3322]: W1123 22:58:32.255193 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.255818 kubelet[3322]: E1123 22:58:32.255284 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.258011 kubelet[3322]: E1123 22:58:32.257282 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.258011 kubelet[3322]: W1123 22:58:32.257320 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.258011 kubelet[3322]: E1123 22:58:32.257352 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.258011 kubelet[3322]: I1123 22:58:32.257411 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b6239d0a-f247-4ff7-8f39-2d2983756ead-socket-dir\") pod \"csi-node-driver-rz2c9\" (UID: \"b6239d0a-f247-4ff7-8f39-2d2983756ead\") " pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:32.261961 kubelet[3322]: E1123 22:58:32.261360 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.261961 kubelet[3322]: W1123 22:58:32.261398 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.261961 kubelet[3322]: E1123 22:58:32.261580 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.261961 kubelet[3322]: I1123 22:58:32.261636 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b6239d0a-f247-4ff7-8f39-2d2983756ead-varrun\") pod \"csi-node-driver-rz2c9\" (UID: \"b6239d0a-f247-4ff7-8f39-2d2983756ead\") " pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:32.266298 kubelet[3322]: E1123 22:58:32.265365 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.266298 kubelet[3322]: W1123 22:58:32.265404 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.266298 kubelet[3322]: E1123 22:58:32.265476 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.267109 kubelet[3322]: E1123 22:58:32.267075 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.267281 kubelet[3322]: W1123 22:58:32.267230 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.267689 kubelet[3322]: E1123 22:58:32.267529 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.270596 kubelet[3322]: E1123 22:58:32.270522 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.270596 kubelet[3322]: W1123 22:58:32.270558 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.270596 kubelet[3322]: E1123 22:58:32.270590 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.271555 kubelet[3322]: E1123 22:58:32.270980 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.271555 kubelet[3322]: W1123 22:58:32.271009 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.271555 kubelet[3322]: E1123 22:58:32.271033 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.344220 systemd[1]: Started cri-containerd-9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f.scope - libcontainer container 9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f. Nov 23 22:58:32.363875 kubelet[3322]: E1123 22:58:32.362913 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.363875 kubelet[3322]: W1123 22:58:32.363236 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.363875 kubelet[3322]: E1123 22:58:32.363321 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.366081 kubelet[3322]: E1123 22:58:32.365595 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.366081 kubelet[3322]: W1123 22:58:32.365795 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.366081 kubelet[3322]: E1123 22:58:32.365845 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.368366 kubelet[3322]: E1123 22:58:32.367533 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.368366 kubelet[3322]: W1123 22:58:32.367587 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.368366 kubelet[3322]: E1123 22:58:32.367636 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.369201 kubelet[3322]: E1123 22:58:32.368684 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.369201 kubelet[3322]: W1123 22:58:32.368720 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.370098 kubelet[3322]: E1123 22:58:32.369737 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.370569 kubelet[3322]: E1123 22:58:32.370526 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.370569 kubelet[3322]: W1123 22:58:32.370562 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.371356 kubelet[3322]: E1123 22:58:32.370700 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.371791 kubelet[3322]: E1123 22:58:32.371750 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.371791 kubelet[3322]: W1123 22:58:32.371784 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.372029 kubelet[3322]: E1123 22:58:32.371916 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.373501 kubelet[3322]: E1123 22:58:32.373456 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.373501 kubelet[3322]: W1123 22:58:32.373492 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.375151 kubelet[3322]: E1123 22:58:32.374796 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.376593 kubelet[3322]: E1123 22:58:32.376541 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.376593 kubelet[3322]: W1123 22:58:32.376586 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.377074 kubelet[3322]: E1123 22:58:32.376908 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.377707 kubelet[3322]: E1123 22:58:32.377656 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.377707 kubelet[3322]: W1123 22:58:32.377693 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.377953 kubelet[3322]: E1123 22:58:32.377755 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.378692 kubelet[3322]: E1123 22:58:32.378639 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.378692 kubelet[3322]: W1123 22:58:32.378678 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.378993 kubelet[3322]: E1123 22:58:32.378922 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.380178 kubelet[3322]: E1123 22:58:32.379936 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.380178 kubelet[3322]: W1123 22:58:32.379970 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.381338 kubelet[3322]: E1123 22:58:32.380577 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.381717 kubelet[3322]: E1123 22:58:32.381687 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.382239 kubelet[3322]: W1123 22:58:32.381856 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.382239 kubelet[3322]: E1123 22:58:32.382008 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.383882 kubelet[3322]: E1123 22:58:32.382934 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.383882 kubelet[3322]: W1123 22:58:32.383159 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.383882 kubelet[3322]: E1123 22:58:32.383235 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.385188 kubelet[3322]: E1123 22:58:32.384559 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.385188 kubelet[3322]: W1123 22:58:32.384589 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.386834 kubelet[3322]: E1123 22:58:32.386797 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.387149 kubelet[3322]: W1123 22:58:32.386972 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.388622 kubelet[3322]: E1123 22:58:32.388300 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.388622 kubelet[3322]: W1123 22:58:32.388340 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.388622 kubelet[3322]: E1123 22:58:32.388576 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.388870 kubelet[3322]: E1123 22:58:32.388791 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.388870 kubelet[3322]: E1123 22:58:32.388817 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.389155 kubelet[3322]: E1123 22:58:32.389057 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.389155 kubelet[3322]: W1123 22:58:32.389079 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.389155 kubelet[3322]: E1123 22:58:32.389115 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.391629 kubelet[3322]: E1123 22:58:32.391574 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.391629 kubelet[3322]: W1123 22:58:32.391617 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.392121 kubelet[3322]: E1123 22:58:32.391666 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.392741 kubelet[3322]: E1123 22:58:32.392616 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.393007 kubelet[3322]: W1123 22:58:32.392798 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.393007 kubelet[3322]: E1123 22:58:32.392851 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.393862 kubelet[3322]: E1123 22:58:32.393644 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.393989 kubelet[3322]: W1123 22:58:32.393831 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.394296 kubelet[3322]: E1123 22:58:32.394160 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.394817 kubelet[3322]: E1123 22:58:32.394665 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.394900 kubelet[3322]: W1123 22:58:32.394810 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.394989 kubelet[3322]: E1123 22:58:32.394952 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.395514 kubelet[3322]: E1123 22:58:32.395476 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.395514 kubelet[3322]: W1123 22:58:32.395508 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.395678 kubelet[3322]: E1123 22:58:32.395578 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.397019 kubelet[3322]: E1123 22:58:32.396964 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.397019 kubelet[3322]: W1123 22:58:32.397004 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.397216 kubelet[3322]: E1123 22:58:32.397052 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.398323 kubelet[3322]: E1123 22:58:32.398204 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.398323 kubelet[3322]: W1123 22:58:32.398240 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.399039 kubelet[3322]: E1123 22:58:32.398989 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.401006 kubelet[3322]: E1123 22:58:32.400935 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.401006 kubelet[3322]: W1123 22:58:32.400979 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.401006 kubelet[3322]: E1123 22:58:32.401012 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:32.425294 kubelet[3322]: E1123 22:58:32.423422 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:32.425294 kubelet[3322]: W1123 22:58:32.423462 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:32.425294 kubelet[3322]: E1123 22:58:32.423498 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:32.476607 containerd[2005]: time="2025-11-23T22:58:32.476541193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7864c7f867-n8d2k,Uid:a4758881-5b90-4144-9163-99485c12391a,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce7f38ccca2eab55073f7cbd27db00e4984bcecdf22d26a444f2cb5325570b0a\"" Nov 23 22:58:32.483634 containerd[2005]: time="2025-11-23T22:58:32.483586429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 22:58:32.487280 containerd[2005]: time="2025-11-23T22:58:32.487122625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zmgh5,Uid:66a65c3c-42c4-4308-8b3c-7b79162ed287,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\"" Nov 23 22:58:33.641780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505420288.mount: Deactivated successfully. Nov 23 22:58:33.746239 kubelet[3322]: E1123 22:58:33.745245 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:34.428645 containerd[2005]: time="2025-11-23T22:58:34.428577447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:34.430228 containerd[2005]: time="2025-11-23T22:58:34.429893763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 22:58:34.431311 containerd[2005]: time="2025-11-23T22:58:34.431232411Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:34.434579 containerd[2005]: time="2025-11-23T22:58:34.434530299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:34.435780 containerd[2005]: time="2025-11-23T22:58:34.435726747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.951587958s" Nov 23 22:58:34.435985 containerd[2005]: time="2025-11-23T22:58:34.435779871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 22:58:34.439673 containerd[2005]: time="2025-11-23T22:58:34.439595091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 22:58:34.475480 containerd[2005]: time="2025-11-23T22:58:34.474996195Z" level=info msg="CreateContainer within sandbox \"ce7f38ccca2eab55073f7cbd27db00e4984bcecdf22d26a444f2cb5325570b0a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 22:58:34.486714 containerd[2005]: time="2025-11-23T22:58:34.486640131Z" level=info msg="Container 
0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:34.495446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3452371353.mount: Deactivated successfully. Nov 23 22:58:34.507727 containerd[2005]: time="2025-11-23T22:58:34.507677643Z" level=info msg="CreateContainer within sandbox \"ce7f38ccca2eab55073f7cbd27db00e4984bcecdf22d26a444f2cb5325570b0a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5\"" Nov 23 22:58:34.509280 containerd[2005]: time="2025-11-23T22:58:34.509149491Z" level=info msg="StartContainer for \"0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5\"" Nov 23 22:58:34.512430 containerd[2005]: time="2025-11-23T22:58:34.512337747Z" level=info msg="connecting to shim 0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5" address="unix:///run/containerd/s/33a13a29d0a724bd2c581681cbfeab357605a587fe3af6dd5c617f95ff3ebda6" protocol=ttrpc version=3 Nov 23 22:58:34.552575 systemd[1]: Started cri-containerd-0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5.scope - libcontainer container 0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5. Nov 23 22:58:34.648011 containerd[2005]: time="2025-11-23T22:58:34.646927084Z" level=info msg="StartContainer for \"0cbe1bbf39a3b9ce1f3050495c1e6937b94baf5ceb664c9a2ad89949326dd8e5\" returns successfully" Nov 23 22:58:35.049553 kubelet[3322]: E1123 22:58:35.049462 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.049553 kubelet[3322]: W1123 22:58:35.049530 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.050342 kubelet[3322]: E1123 22:58:35.049586 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.051133 kubelet[3322]: E1123 22:58:35.051077 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.052703 kubelet[3322]: W1123 22:58:35.051121 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.052886 kubelet[3322]: E1123 22:58:35.052716 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.053766 kubelet[3322]: E1123 22:58:35.053690 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.053766 kubelet[3322]: W1123 22:58:35.053727 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.053937 kubelet[3322]: E1123 22:58:35.053794 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.054981 kubelet[3322]: E1123 22:58:35.054932 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.055346 kubelet[3322]: W1123 22:58:35.055297 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.055346 kubelet[3322]: E1123 22:58:35.055352 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.056565 kubelet[3322]: E1123 22:58:35.056515 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.056565 kubelet[3322]: W1123 22:58:35.056557 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.056784 kubelet[3322]: E1123 22:58:35.056589 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.057727 kubelet[3322]: E1123 22:58:35.057684 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.057727 kubelet[3322]: W1123 22:58:35.057720 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.057885 kubelet[3322]: E1123 22:58:35.057752 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.058907 kubelet[3322]: E1123 22:58:35.058797 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.058907 kubelet[3322]: W1123 22:58:35.058897 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.059152 kubelet[3322]: E1123 22:58:35.058929 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.060409 kubelet[3322]: E1123 22:58:35.060353 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.060874 kubelet[3322]: W1123 22:58:35.060397 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.060874 kubelet[3322]: E1123 22:58:35.060459 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.062362 kubelet[3322]: E1123 22:58:35.062309 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.062482 kubelet[3322]: W1123 22:58:35.062373 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.062482 kubelet[3322]: E1123 22:58:35.062408 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.064647 kubelet[3322]: E1123 22:58:35.064584 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.064647 kubelet[3322]: W1123 22:58:35.064630 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.064878 kubelet[3322]: E1123 22:58:35.064663 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.065101 kubelet[3322]: E1123 22:58:35.065057 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.065101 kubelet[3322]: W1123 22:58:35.065090 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.065209 kubelet[3322]: E1123 22:58:35.065114 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.065495 kubelet[3322]: E1123 22:58:35.065452 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.065495 kubelet[3322]: W1123 22:58:35.065487 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.065639 kubelet[3322]: E1123 22:58:35.065511 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.065865 kubelet[3322]: E1123 22:58:35.065827 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.065865 kubelet[3322]: W1123 22:58:35.065856 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.065964 kubelet[3322]: E1123 22:58:35.065877 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.066188 kubelet[3322]: E1123 22:58:35.066153 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.066188 kubelet[3322]: W1123 22:58:35.066180 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.066346 kubelet[3322]: E1123 22:58:35.066202 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.068351 kubelet[3322]: E1123 22:58:35.067821 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.068351 kubelet[3322]: W1123 22:58:35.067868 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.068351 kubelet[3322]: E1123 22:58:35.067900 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.103449 kubelet[3322]: E1123 22:58:35.103388 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.103449 kubelet[3322]: W1123 22:58:35.103435 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.103946 kubelet[3322]: E1123 22:58:35.103468 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.103946 kubelet[3322]: E1123 22:58:35.103902 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.103946 kubelet[3322]: W1123 22:58:35.103921 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.104336 kubelet[3322]: E1123 22:58:35.103967 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.104336 kubelet[3322]: E1123 22:58:35.104320 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.104465 kubelet[3322]: W1123 22:58:35.104339 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.104465 kubelet[3322]: E1123 22:58:35.104361 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.105398 kubelet[3322]: E1123 22:58:35.105349 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.105398 kubelet[3322]: W1123 22:58:35.105391 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.106675 kubelet[3322]: E1123 22:58:35.105435 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.106675 kubelet[3322]: E1123 22:58:35.105762 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.106675 kubelet[3322]: W1123 22:58:35.105778 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.106675 kubelet[3322]: E1123 22:58:35.105797 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.106675 kubelet[3322]: E1123 22:58:35.106113 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.106675 kubelet[3322]: W1123 22:58:35.106131 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.106675 kubelet[3322]: E1123 22:58:35.106151 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.107159 kubelet[3322]: E1123 22:58:35.107130 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.107313 kubelet[3322]: W1123 22:58:35.107284 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.108013 kubelet[3322]: E1123 22:58:35.107549 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.108013 kubelet[3322]: E1123 22:58:35.107938 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.108013 kubelet[3322]: W1123 22:58:35.107958 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.108013 kubelet[3322]: E1123 22:58:35.108001 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.108349 kubelet[3322]: E1123 22:58:35.108317 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.108349 kubelet[3322]: W1123 22:58:35.108344 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.109106 kubelet[3322]: E1123 22:58:35.108511 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.109388 kubelet[3322]: E1123 22:58:35.109344 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.109388 kubelet[3322]: W1123 22:58:35.109382 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.109528 kubelet[3322]: E1123 22:58:35.109501 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.110292 kubelet[3322]: E1123 22:58:35.109814 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.110292 kubelet[3322]: W1123 22:58:35.109845 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.110292 kubelet[3322]: E1123 22:58:35.109958 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.110292 kubelet[3322]: E1123 22:58:35.110207 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.110292 kubelet[3322]: W1123 22:58:35.110222 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.110292 kubelet[3322]: E1123 22:58:35.110248 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.110672 kubelet[3322]: E1123 22:58:35.110636 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.110672 kubelet[3322]: W1123 22:58:35.110665 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.110784 kubelet[3322]: E1123 22:58:35.110707 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.112411 kubelet[3322]: E1123 22:58:35.112020 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.112411 kubelet[3322]: W1123 22:58:35.112046 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.112411 kubelet[3322]: E1123 22:58:35.112079 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.113178 kubelet[3322]: E1123 22:58:35.112443 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.113178 kubelet[3322]: W1123 22:58:35.112462 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.113178 kubelet[3322]: E1123 22:58:35.112499 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.113178 kubelet[3322]: E1123 22:58:35.112818 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.113178 kubelet[3322]: W1123 22:58:35.112836 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.113178 kubelet[3322]: E1123 22:58:35.112869 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.113615 kubelet[3322]: E1123 22:58:35.113588 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.113958 kubelet[3322]: W1123 22:58:35.113713 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.114130 kubelet[3322]: E1123 22:58:35.114102 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:58:35.116772 kubelet[3322]: E1123 22:58:35.116528 3322 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:58:35.116772 kubelet[3322]: W1123 22:58:35.116564 3322 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:58:35.116772 kubelet[3322]: E1123 22:58:35.116594 3322 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:58:35.619215 containerd[2005]: time="2025-11-23T22:58:35.618531977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:35.620439 containerd[2005]: time="2025-11-23T22:58:35.620379149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 22:58:35.622414 containerd[2005]: time="2025-11-23T22:58:35.622353113Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:35.625945 containerd[2005]: time="2025-11-23T22:58:35.625868201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:35.627994 containerd[2005]: time="2025-11-23T22:58:35.627378041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.187701854s" Nov 23 22:58:35.627994 containerd[2005]: time="2025-11-23T22:58:35.627436973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 22:58:35.634198 containerd[2005]: time="2025-11-23T22:58:35.633240497Z" level=info msg="CreateContainer within sandbox \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 22:58:35.650438 containerd[2005]: time="2025-11-23T22:58:35.650383253Z" level=info msg="Container bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:35.659461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482786016.mount: Deactivated successfully. Nov 23 22:58:35.671462 containerd[2005]: time="2025-11-23T22:58:35.671237633Z" level=info msg="CreateContainer within sandbox \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf\"" Nov 23 22:58:35.674072 containerd[2005]: time="2025-11-23T22:58:35.673801289Z" level=info msg="StartContainer for \"bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf\"" Nov 23 22:58:35.679667 containerd[2005]: time="2025-11-23T22:58:35.679581449Z" level=info msg="connecting to shim bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf" address="unix:///run/containerd/s/dab2e52cbb8cbc34e7823325313afb9b21d4eaecf6fe00318d45f3d64da699d8" protocol=ttrpc version=3 Nov 23 22:58:35.719572 systemd[1]: Started cri-containerd-bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf.scope - libcontainer container bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf. 
Nov 23 22:58:35.745486 kubelet[3322]: E1123 22:58:35.745407 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:35.818585 containerd[2005]: time="2025-11-23T22:58:35.818525622Z" level=info msg="StartContainer for \"bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf\" returns successfully" Nov 23 22:58:35.852354 systemd[1]: cri-containerd-bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf.scope: Deactivated successfully. Nov 23 22:58:35.861010 containerd[2005]: time="2025-11-23T22:58:35.860947974Z" level=info msg="received container exit event container_id:\"bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf\" id:\"bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf\" pid:4282 exited_at:{seconds:1763938715 nanos:860111598}" Nov 23 22:58:35.906620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc1a248eca4c79e0ec9776c7d3d514d3f9d459852062b384c9cb3f767af118bf-rootfs.mount: Deactivated successfully. Nov 23 22:58:36.029461 kubelet[3322]: I1123 22:58:36.029241 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7864c7f867-n8d2k" podStartSLOduration=3.073436533 podStartE2EDuration="5.029173179s" podCreationTimestamp="2025-11-23 22:58:31 +0000 UTC" firstStartedPulling="2025-11-23 22:58:32.482028241 +0000 UTC m=+33.001104909" lastFinishedPulling="2025-11-23 22:58:34.437764887 +0000 UTC m=+34.956841555" observedRunningTime="2025-11-23 22:58:35.03136151 +0000 UTC m=+35.550438190" watchObservedRunningTime="2025-11-23 22:58:36.029173179 +0000 UTC m=+36.548249847" Nov 23 22:58:37.001561 containerd[2005]: time="2025-11-23T22:58:37.001492732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 22:58:37.746056 kubelet[3322]: E1123 22:58:37.745935 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:39.746634 kubelet[3322]: E1123 22:58:39.746579 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:39.903890 containerd[2005]: time="2025-11-23T22:58:39.902376898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:39.904969 containerd[2005]: time="2025-11-23T22:58:39.904921606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 22:58:39.907285 containerd[2005]: time="2025-11-23T22:58:39.907222666Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:39.911706 containerd[2005]: time="2025-11-23T22:58:39.911658730Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:39.912898 containerd[2005]: time="2025-11-23T22:58:39.912842602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.91128237s" Nov 23 22:58:39.912997 containerd[2005]: time="2025-11-23T22:58:39.912899374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 22:58:39.919404 containerd[2005]: time="2025-11-23T22:58:39.919353082Z" level=info msg="CreateContainer within sandbox \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 22:58:39.941615 containerd[2005]: time="2025-11-23T22:58:39.941538754Z" level=info msg="Container d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:39.947867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629765681.mount: Deactivated successfully. Nov 23 22:58:39.964681 containerd[2005]: time="2025-11-23T22:58:39.964590094Z" level=info msg="CreateContainer within sandbox \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024\"" Nov 23 22:58:39.965557 containerd[2005]: time="2025-11-23T22:58:39.965428858Z" level=info msg="StartContainer for \"d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024\"" Nov 23 22:58:39.970449 containerd[2005]: time="2025-11-23T22:58:39.970328866Z" level=info msg="connecting to shim d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024" address="unix:///run/containerd/s/dab2e52cbb8cbc34e7823325313afb9b21d4eaecf6fe00318d45f3d64da699d8" protocol=ttrpc version=3 Nov 23 22:58:40.014650 systemd[1]: Started cri-containerd-d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024.scope - libcontainer container d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024. Nov 23 22:58:40.163653 containerd[2005]: time="2025-11-23T22:58:40.163588231Z" level=info msg="StartContainer for \"d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024\" returns successfully" Nov 23 22:58:41.116473 containerd[2005]: time="2025-11-23T22:58:41.116133428Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:58:41.121529 systemd[1]: cri-containerd-d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024.scope: Deactivated successfully. Nov 23 22:58:41.122065 systemd[1]: cri-containerd-d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024.scope: Consumed 917ms CPU time, 190.8M memory peak, 165.9M written to disk. 
Nov 23 22:58:41.130169 containerd[2005]: time="2025-11-23T22:58:41.130083956Z" level=info msg="received container exit event container_id:\"d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024\" id:\"d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024\" pid:4345 exited_at:{seconds:1763938721 nanos:129763676}" Nov 23 22:58:41.174381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d754f366a436ff0cc1e77ebcd5f1a6ecd503e13be04c466fa7a362121574a024-rootfs.mount: Deactivated successfully. Nov 23 22:58:41.179527 kubelet[3322]: I1123 22:58:41.179447 3322 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 22:58:41.278904 systemd[1]: Created slice kubepods-burstable-poddbdf1f8a_97eb_41f6_84e0_31293d800724.slice - libcontainer container kubepods-burstable-poddbdf1f8a_97eb_41f6_84e0_31293d800724.slice. Nov 23 22:58:41.342972 systemd[1]: Created slice kubepods-besteffort-podd24d7369_6494_4a66_8309_347720b5fc56.slice - libcontainer container kubepods-besteffort-podd24d7369_6494_4a66_8309_347720b5fc56.slice. Nov 23 22:58:41.365073 systemd[1]: Created slice kubepods-besteffort-pod328f5f71_5736_4873_add1_f3d5d3b7eef2.slice - libcontainer container kubepods-besteffort-pod328f5f71_5736_4873_add1_f3d5d3b7eef2.slice. Nov 23 22:58:41.366458 kubelet[3322]: I1123 22:58:41.366211 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/328f5f71-5736-4873-add1-f3d5d3b7eef2-goldmane-key-pair\") pod \"goldmane-666569f655-zrbmg\" (UID: \"328f5f71-5736-4873-add1-f3d5d3b7eef2\") " pod="calico-system/goldmane-666569f655-zrbmg" Nov 23 22:58:41.366458 kubelet[3322]: I1123 22:58:41.366306 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmrzt\" (UniqueName: \"kubernetes.io/projected/5a862233-e232-405f-aea9-b959cf926288-kube-api-access-rmrzt\") pod \"whisker-887fb797b-48g2p\" (UID: \"5a862233-e232-405f-aea9-b959cf926288\") " pod="calico-system/whisker-887fb797b-48g2p" Nov 23 22:58:41.366458 kubelet[3322]: I1123 22:58:41.366360 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ffj\" (UniqueName: \"kubernetes.io/projected/d24d7369-6494-4a66-8309-347720b5fc56-kube-api-access-x8ffj\") pod \"calico-apiserver-855476946d-znnxr\" (UID: \"d24d7369-6494-4a66-8309-347720b5fc56\") " pod="calico-apiserver/calico-apiserver-855476946d-znnxr" Nov 23 22:58:41.366458 kubelet[3322]: I1123 22:58:41.366402 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88tgl\" (UniqueName: \"kubernetes.io/projected/dbdf1f8a-97eb-41f6-84e0-31293d800724-kube-api-access-88tgl\") pod \"coredns-668d6bf9bc-f2mxq\" (UID: \"dbdf1f8a-97eb-41f6-84e0-31293d800724\") " pod="kube-system/coredns-668d6bf9bc-f2mxq" Nov 23 22:58:41.368091 kubelet[3322]: I1123 22:58:41.366461 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56da4e3d-05e9-4599-8060-52650f1b8e04-config-volume\") pod \"coredns-668d6bf9bc-hsvdw\" (UID: \"56da4e3d-05e9-4599-8060-52650f1b8e04\") " pod="kube-system/coredns-668d6bf9bc-hsvdw" Nov 23 22:58:41.368091 kubelet[3322]: I1123 22:58:41.366505 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/efcb5707-de3f-40a1-84e7-2d29faf16856-tigera-ca-bundle\") pod \"calico-kube-controllers-5d46955649-8px8j\" (UID: \"efcb5707-de3f-40a1-84e7-2d29faf16856\") " pod="calico-system/calico-kube-controllers-5d46955649-8px8j" Nov 23 22:58:41.368091 kubelet[3322]: I1123 22:58:41.366547 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7x9\" (UniqueName: \"kubernetes.io/projected/328f5f71-5736-4873-add1-f3d5d3b7eef2-kube-api-access-lq7x9\") pod \"goldmane-666569f655-zrbmg\" (UID: \"328f5f71-5736-4873-add1-f3d5d3b7eef2\") " pod="calico-system/goldmane-666569f655-zrbmg" Nov 23 22:58:41.368091 kubelet[3322]: I1123 22:58:41.366584 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbdf1f8a-97eb-41f6-84e0-31293d800724-config-volume\") pod \"coredns-668d6bf9bc-f2mxq\" (UID: \"dbdf1f8a-97eb-41f6-84e0-31293d800724\") " pod="kube-system/coredns-668d6bf9bc-f2mxq" Nov 23 22:58:41.368091 kubelet[3322]: I1123 22:58:41.366622 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqkhc\" (UniqueName: \"kubernetes.io/projected/ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef-kube-api-access-tqkhc\") pod \"calico-apiserver-855476946d-hc826\" (UID: \"ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef\") " pod="calico-apiserver/calico-apiserver-855476946d-hc826" Nov 23 22:58:41.368484 kubelet[3322]: I1123 22:58:41.366663 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328f5f71-5736-4873-add1-f3d5d3b7eef2-config\") pod \"goldmane-666569f655-zrbmg\" (UID: \"328f5f71-5736-4873-add1-f3d5d3b7eef2\") " pod="calico-system/goldmane-666569f655-zrbmg" Nov 23 22:58:41.368484 kubelet[3322]: I1123 22:58:41.366704 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/328f5f71-5736-4873-add1-f3d5d3b7eef2-goldmane-ca-bundle\") pod \"goldmane-666569f655-zrbmg\" (UID: \"328f5f71-5736-4873-add1-f3d5d3b7eef2\") " pod="calico-system/goldmane-666569f655-zrbmg" Nov 23 22:58:41.368484 kubelet[3322]: I1123 22:58:41.366738 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a862233-e232-405f-aea9-b959cf926288-whisker-ca-bundle\") pod \"whisker-887fb797b-48g2p\" (UID: \"5a862233-e232-405f-aea9-b959cf926288\") " pod="calico-system/whisker-887fb797b-48g2p" Nov 23 22:58:41.368484 kubelet[3322]: I1123 22:58:41.366793 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7pz\" (UniqueName: \"kubernetes.io/projected/efcb5707-de3f-40a1-84e7-2d29faf16856-kube-api-access-cf7pz\") pod \"calico-kube-controllers-5d46955649-8px8j\" (UID: \"efcb5707-de3f-40a1-84e7-2d29faf16856\") " pod="calico-system/calico-kube-controllers-5d46955649-8px8j" Nov 23 22:58:41.368484 kubelet[3322]: I1123 22:58:41.366829 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef-calico-apiserver-certs\") pod \"calico-apiserver-855476946d-hc826\" (UID: \"ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef\") " 
pod="calico-apiserver/calico-apiserver-855476946d-hc826" Nov 23 22:58:41.368763 kubelet[3322]: I1123 22:58:41.366869 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5a862233-e232-405f-aea9-b959cf926288-whisker-backend-key-pair\") pod \"whisker-887fb797b-48g2p\" (UID: \"5a862233-e232-405f-aea9-b959cf926288\") " pod="calico-system/whisker-887fb797b-48g2p" Nov 23 22:58:41.368763 kubelet[3322]: I1123 22:58:41.366908 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842pl\" (UniqueName: \"kubernetes.io/projected/56da4e3d-05e9-4599-8060-52650f1b8e04-kube-api-access-842pl\") pod \"coredns-668d6bf9bc-hsvdw\" (UID: \"56da4e3d-05e9-4599-8060-52650f1b8e04\") " pod="kube-system/coredns-668d6bf9bc-hsvdw" Nov 23 22:58:41.368763 kubelet[3322]: I1123 22:58:41.366947 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d24d7369-6494-4a66-8309-347720b5fc56-calico-apiserver-certs\") pod \"calico-apiserver-855476946d-znnxr\" (UID: \"d24d7369-6494-4a66-8309-347720b5fc56\") " pod="calico-apiserver/calico-apiserver-855476946d-znnxr" Nov 23 22:58:41.388787 systemd[1]: Created slice kubepods-burstable-pod56da4e3d_05e9_4599_8060_52650f1b8e04.slice - libcontainer container kubepods-burstable-pod56da4e3d_05e9_4599_8060_52650f1b8e04.slice. Nov 23 22:58:41.410048 systemd[1]: Created slice kubepods-besteffort-pod5a862233_e232_405f_aea9_b959cf926288.slice - libcontainer container kubepods-besteffort-pod5a862233_e232_405f_aea9_b959cf926288.slice. Nov 23 22:58:41.430420 systemd[1]: Created slice kubepods-besteffort-podefcb5707_de3f_40a1_84e7_2d29faf16856.slice - libcontainer container kubepods-besteffort-podefcb5707_de3f_40a1_84e7_2d29faf16856.slice. Nov 23 22:58:41.448441 systemd[1]: Created slice kubepods-besteffort-podebeea6c8_b9a6_4d9a_a1c8_ed3aa29510ef.slice - libcontainer container kubepods-besteffort-podebeea6c8_b9a6_4d9a_a1c8_ed3aa29510ef.slice. 
Nov 23 22:58:41.664039 containerd[2005]: time="2025-11-23T22:58:41.663877871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-znnxr,Uid:d24d7369-6494-4a66-8309-347720b5fc56,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:41.678746 containerd[2005]: time="2025-11-23T22:58:41.678617855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zrbmg,Uid:328f5f71-5736-4873-add1-f3d5d3b7eef2,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:41.703458 containerd[2005]: time="2025-11-23T22:58:41.703344695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hsvdw,Uid:56da4e3d-05e9-4599-8060-52650f1b8e04,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:41.721168 containerd[2005]: time="2025-11-23T22:58:41.721104983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-887fb797b-48g2p,Uid:5a862233-e232-405f-aea9-b959cf926288,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:41.743459 containerd[2005]: time="2025-11-23T22:58:41.743175371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d46955649-8px8j,Uid:efcb5707-de3f-40a1-84e7-2d29faf16856,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:41.759476 containerd[2005]: time="2025-11-23T22:58:41.758476343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-hc826,Uid:ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:41.771313 systemd[1]: Created slice kubepods-besteffort-podb6239d0a_f247_4ff7_8f39_2d2983756ead.slice - libcontainer container kubepods-besteffort-podb6239d0a_f247_4ff7_8f39_2d2983756ead.slice. Nov 23 22:58:41.786590 containerd[2005]: time="2025-11-23T22:58:41.786508199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rz2c9,Uid:b6239d0a-f247-4ff7-8f39-2d2983756ead,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:41.928823 containerd[2005]: time="2025-11-23T22:58:41.927714720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f2mxq,Uid:dbdf1f8a-97eb-41f6-84e0-31293d800724,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:42.089810 containerd[2005]: time="2025-11-23T22:58:42.089744109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 22:58:42.296308 containerd[2005]: time="2025-11-23T22:58:42.295219762Z" level=error msg="Failed to destroy network for sandbox \"349c48d331fbe1bf0d198355bdfe9ad79b4e4eb398d1241cf406c9353522839e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.304471 systemd[1]: run-netns-cni\x2dc55a52a8\x2d3812\x2d2172\x2da11a\x2dae43067be445.mount: Deactivated successfully. 
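The sandbox failures that follow all trace back to one missing file: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, which the calico/node container writes once it is running (the error text itself says to check exactly that). Until then every pod sandbox add or delete fails with the stat error seen here. A small sketch of that lookup (illustrative, not the plugin's code):

```go
// Sketch of the check behind the "stat /var/lib/calico/nodename" failures:
// the file only exists after calico-node has started and mounted
// /var/lib/calico/ into the CNI plugin's view of the host.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Matches the failure mode in the log:
		// "stat /var/lib/calico/nodename: no such file or directory".
		fmt.Println("calico node name unavailable:", err)
		return
	}
	fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}
```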
Nov 23 22:58:42.311492 containerd[2005]: time="2025-11-23T22:58:42.311401546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-znnxr,Uid:d24d7369-6494-4a66-8309-347720b5fc56,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"349c48d331fbe1bf0d198355bdfe9ad79b4e4eb398d1241cf406c9353522839e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.312408 kubelet[3322]: E1123 22:58:42.311919 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"349c48d331fbe1bf0d198355bdfe9ad79b4e4eb398d1241cf406c9353522839e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.312408 kubelet[3322]: E1123 22:58:42.312035 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"349c48d331fbe1bf0d198355bdfe9ad79b4e4eb398d1241cf406c9353522839e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" Nov 23 22:58:42.312408 kubelet[3322]: E1123 22:58:42.312069 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"349c48d331fbe1bf0d198355bdfe9ad79b4e4eb398d1241cf406c9353522839e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" Nov 23 22:58:42.314198 kubelet[3322]: E1123 22:58:42.312131 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855476946d-znnxr_calico-apiserver(d24d7369-6494-4a66-8309-347720b5fc56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855476946d-znnxr_calico-apiserver(d24d7369-6494-4a66-8309-347720b5fc56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"349c48d331fbe1bf0d198355bdfe9ad79b4e4eb398d1241cf406c9353522839e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:58:42.355910 containerd[2005]: time="2025-11-23T22:58:42.355732162Z" level=error msg="Failed to destroy network for sandbox \"70509e23d0b5d4fca8fbd784a6d825cf23894869cf7283fcc18e43526e2f5a79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.361512 containerd[2005]: time="2025-11-23T22:58:42.361438966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hsvdw,Uid:56da4e3d-05e9-4599-8060-52650f1b8e04,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"70509e23d0b5d4fca8fbd784a6d825cf23894869cf7283fcc18e43526e2f5a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.362696 kubelet[3322]: E1123 22:58:42.362055 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70509e23d0b5d4fca8fbd784a6d825cf23894869cf7283fcc18e43526e2f5a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.362696 kubelet[3322]: E1123 22:58:42.362152 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70509e23d0b5d4fca8fbd784a6d825cf23894869cf7283fcc18e43526e2f5a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hsvdw" Nov 23 22:58:42.362696 kubelet[3322]: E1123 22:58:42.362191 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70509e23d0b5d4fca8fbd784a6d825cf23894869cf7283fcc18e43526e2f5a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hsvdw" Nov 23 22:58:42.362939 kubelet[3322]: E1123 22:58:42.362323 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hsvdw_kube-system(56da4e3d-05e9-4599-8060-52650f1b8e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hsvdw_kube-system(56da4e3d-05e9-4599-8060-52650f1b8e04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70509e23d0b5d4fca8fbd784a6d825cf23894869cf7283fcc18e43526e2f5a79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hsvdw" podUID="56da4e3d-05e9-4599-8060-52650f1b8e04" Nov 23 22:58:42.364415 containerd[2005]: time="2025-11-23T22:58:42.364332130Z" level=error msg="Failed to destroy network for sandbox \"7bfe855b72ec3db430c68346bd2cf6f0b41458a2a33b792db4d3c3bc23d37389\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.366980 systemd[1]: run-netns-cni\x2d14ea75b3\x2d1362\x2d713f\x2d8234\x2df94b5a9b7b06.mount: Deactivated successfully. 
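The same CreatePodSandboxError repeats once per pending pod. One way to see the blast radius at a glance is to extract the pod="namespace/name" field from the kubelet lines; a rough sketch, assuming the log text is piped in on stdin (for example from journalctl):

```go
// Rough sketch: tally which pods are hitting CreatePodSandboxError by
// extracting the pod="namespace/name" field from kubelet log lines read
// from stdin. The field names match the lines above; the rest is illustrative.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	podRef := regexp.MustCompile(`CreatePodSandboxError.*pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be long
	for sc.Scan() {
		if m := podRef.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%-55s %d failures\n", pod, n)
	}
}
```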
Nov 23 22:58:42.375754 containerd[2005]: time="2025-11-23T22:58:42.375691798Z" level=error msg="Failed to destroy network for sandbox \"c125084143915acfa21aceaccfd350790131998a68dadefa3b12ffaffef8e29d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.379627 containerd[2005]: time="2025-11-23T22:58:42.379555042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zrbmg,Uid:328f5f71-5736-4873-add1-f3d5d3b7eef2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfe855b72ec3db430c68346bd2cf6f0b41458a2a33b792db4d3c3bc23d37389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.380395 systemd[1]: run-netns-cni\x2d79183aa9\x2d37ee\x2dea77\x2d3ade\x2d0383debfb887.mount: Deactivated successfully. Nov 23 22:58:42.383318 kubelet[3322]: E1123 22:58:42.382125 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfe855b72ec3db430c68346bd2cf6f0b41458a2a33b792db4d3c3bc23d37389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.383318 kubelet[3322]: E1123 22:58:42.382230 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfe855b72ec3db430c68346bd2cf6f0b41458a2a33b792db4d3c3bc23d37389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zrbmg" Nov 23 22:58:42.383318 kubelet[3322]: E1123 22:58:42.382303 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bfe855b72ec3db430c68346bd2cf6f0b41458a2a33b792db4d3c3bc23d37389\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zrbmg" Nov 23 22:58:42.387041 kubelet[3322]: E1123 22:58:42.382401 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-zrbmg_calico-system(328f5f71-5736-4873-add1-f3d5d3b7eef2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zrbmg_calico-system(328f5f71-5736-4873-add1-f3d5d3b7eef2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bfe855b72ec3db430c68346bd2cf6f0b41458a2a33b792db4d3c3bc23d37389\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:58:42.390630 systemd[1]: run-netns-cni\x2dfde21f30\x2d57cd\x2d8fc3\x2d17d8\x2d68769da4a0e9.mount: Deactivated successfully. 
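The run-netns-cni\x2d… mount units being deactivated are systemd's escaped names for the per-sandbox network namespaces under /run/netns/. A sketch of the reverse mapping, relying only on systemd's documented unit-name escaping ("/" becomes "-", a literal "-" becomes "\x2d"):

```go
// Sketch of systemd unit-name unescaping, to read mount units like
// "run-netns-cni\x2d79183aa9\x2d...mount" back into their paths
// (/run/netns/cni-79183aa9-...). Based on systemd's documented escaping:
// "/" is encoded as "-" and other special bytes as "\xHH".
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

var hexEscape = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

func unitToPath(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	// Path separators first: every remaining "-" stands for "/".
	path := "/" + strings.ReplaceAll(name, "-", "/")
	// Then decode \xHH escapes (e.g. \x2d back to "-").
	return hexEscape.ReplaceAllStringFunc(path, func(m string) string {
		b, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(b))
	})
}

func main() {
	fmt.Println(unitToPath(`run-netns-cni\x2d79183aa9\x2d37ee\x2dea77\x2d3ade\x2d0383debfb887.mount`))
	// /run/netns/cni-79183aa9-37ee-ea77-3ade-0383debfb887
}
```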
Nov 23 22:58:42.391955 containerd[2005]: time="2025-11-23T22:58:42.391768570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-hc826,Uid:ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c125084143915acfa21aceaccfd350790131998a68dadefa3b12ffaffef8e29d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.396414 kubelet[3322]: E1123 22:58:42.392600 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c125084143915acfa21aceaccfd350790131998a68dadefa3b12ffaffef8e29d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.396414 kubelet[3322]: E1123 22:58:42.394842 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c125084143915acfa21aceaccfd350790131998a68dadefa3b12ffaffef8e29d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855476946d-hc826" Nov 23 22:58:42.396414 kubelet[3322]: E1123 22:58:42.394880 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c125084143915acfa21aceaccfd350790131998a68dadefa3b12ffaffef8e29d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-855476946d-hc826" Nov 23 22:58:42.396705 kubelet[3322]: E1123 22:58:42.394947 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-855476946d-hc826_calico-apiserver(ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-855476946d-hc826_calico-apiserver(ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c125084143915acfa21aceaccfd350790131998a68dadefa3b12ffaffef8e29d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:58:42.409608 containerd[2005]: time="2025-11-23T22:58:42.409382578Z" level=error msg="Failed to destroy network for sandbox \"52ce2a148dc499506120243887a80c2ffcc2787b831e9b077f156d76638a93a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.413493 containerd[2005]: time="2025-11-23T22:58:42.413379982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rz2c9,Uid:b6239d0a-f247-4ff7-8f39-2d2983756ead,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"52ce2a148dc499506120243887a80c2ffcc2787b831e9b077f156d76638a93a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.415507 kubelet[3322]: E1123 22:58:42.413714 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ce2a148dc499506120243887a80c2ffcc2787b831e9b077f156d76638a93a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.415507 kubelet[3322]: E1123 22:58:42.413790 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ce2a148dc499506120243887a80c2ffcc2787b831e9b077f156d76638a93a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:42.415507 kubelet[3322]: E1123 22:58:42.413827 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52ce2a148dc499506120243887a80c2ffcc2787b831e9b077f156d76638a93a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rz2c9" Nov 23 22:58:42.417548 kubelet[3322]: E1123 22:58:42.413904 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52ce2a148dc499506120243887a80c2ffcc2787b831e9b077f156d76638a93a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:42.422583 containerd[2005]: time="2025-11-23T22:58:42.422311090Z" level=error msg="Failed to destroy network for sandbox \"4d2e448812bb675a29bf06822d73ac4e55185da690c5057dc8071b03388bc6b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.428189 containerd[2005]: time="2025-11-23T22:58:42.428066158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-887fb797b-48g2p,Uid:5a862233-e232-405f-aea9-b959cf926288,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2e448812bb675a29bf06822d73ac4e55185da690c5057dc8071b03388bc6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.431326 kubelet[3322]: E1123 22:58:42.430138 3322 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2e448812bb675a29bf06822d73ac4e55185da690c5057dc8071b03388bc6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.431326 kubelet[3322]: E1123 22:58:42.430221 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2e448812bb675a29bf06822d73ac4e55185da690c5057dc8071b03388bc6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-887fb797b-48g2p" Nov 23 22:58:42.431326 kubelet[3322]: E1123 22:58:42.430277 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d2e448812bb675a29bf06822d73ac4e55185da690c5057dc8071b03388bc6b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-887fb797b-48g2p" Nov 23 22:58:42.431600 kubelet[3322]: E1123 22:58:42.430346 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-887fb797b-48g2p_calico-system(5a862233-e232-405f-aea9-b959cf926288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-887fb797b-48g2p_calico-system(5a862233-e232-405f-aea9-b959cf926288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d2e448812bb675a29bf06822d73ac4e55185da690c5057dc8071b03388bc6b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-887fb797b-48g2p" podUID="5a862233-e232-405f-aea9-b959cf926288" Nov 23 22:58:42.432323 containerd[2005]: time="2025-11-23T22:58:42.431873326Z" level=error msg="Failed to destroy network for sandbox \"8700463a4e3bc7c50213b7f2fff04060fde00e4e2524bb565b1ec697633fb01a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.436862 containerd[2005]: time="2025-11-23T22:58:42.436683371Z" level=error msg="Failed to destroy network for sandbox \"7f0308c564e3d21c7a110cd2ab8e1b618542b0f52484d14deb5d255b262674e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.437099 containerd[2005]: time="2025-11-23T22:58:42.437054723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d46955649-8px8j,Uid:efcb5707-de3f-40a1-84e7-2d29faf16856,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8700463a4e3bc7c50213b7f2fff04060fde00e4e2524bb565b1ec697633fb01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 
22:58:42.437890 kubelet[3322]: E1123 22:58:42.437789 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8700463a4e3bc7c50213b7f2fff04060fde00e4e2524bb565b1ec697633fb01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.438014 kubelet[3322]: E1123 22:58:42.437948 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8700463a4e3bc7c50213b7f2fff04060fde00e4e2524bb565b1ec697633fb01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" Nov 23 22:58:42.438103 kubelet[3322]: E1123 22:58:42.437984 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8700463a4e3bc7c50213b7f2fff04060fde00e4e2524bb565b1ec697633fb01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" Nov 23 22:58:42.438160 kubelet[3322]: E1123 22:58:42.438127 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d46955649-8px8j_calico-system(efcb5707-de3f-40a1-84e7-2d29faf16856)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d46955649-8px8j_calico-system(efcb5707-de3f-40a1-84e7-2d29faf16856)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8700463a4e3bc7c50213b7f2fff04060fde00e4e2524bb565b1ec697633fb01a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:58:42.450104 containerd[2005]: time="2025-11-23T22:58:42.449912915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f2mxq,Uid:dbdf1f8a-97eb-41f6-84e0-31293d800724,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0308c564e3d21c7a110cd2ab8e1b618542b0f52484d14deb5d255b262674e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.450467 kubelet[3322]: E1123 22:58:42.450351 3322 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0308c564e3d21c7a110cd2ab8e1b618542b0f52484d14deb5d255b262674e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:58:42.450576 kubelet[3322]: E1123 22:58:42.450501 3322 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7f0308c564e3d21c7a110cd2ab8e1b618542b0f52484d14deb5d255b262674e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-f2mxq" Nov 23 22:58:42.450576 kubelet[3322]: E1123 22:58:42.450560 3322 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0308c564e3d21c7a110cd2ab8e1b618542b0f52484d14deb5d255b262674e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-f2mxq" Nov 23 22:58:42.450819 kubelet[3322]: E1123 22:58:42.450758 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-f2mxq_kube-system(dbdf1f8a-97eb-41f6-84e0-31293d800724)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-f2mxq_kube-system(dbdf1f8a-97eb-41f6-84e0-31293d800724)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f0308c564e3d21c7a110cd2ab8e1b618542b0f52484d14deb5d255b262674e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-f2mxq" podUID="dbdf1f8a-97eb-41f6-84e0-31293d800724" Nov 23 22:58:43.174724 systemd[1]: run-netns-cni\x2db76cfabf\x2d2318\x2d023c\x2d18f6\x2dc62db5bf93ce.mount: Deactivated successfully. Nov 23 22:58:43.174904 systemd[1]: run-netns-cni\x2d36af5e3e\x2dd1b0\x2d2e19\x2de1fb\x2d97cd54529313.mount: Deactivated successfully. Nov 23 22:58:43.175037 systemd[1]: run-netns-cni\x2d59d9ff0e\x2d3e90\x2d3d3a\x2dbbe0\x2ddacfdc20f702.mount: Deactivated successfully. Nov 23 22:58:43.175176 systemd[1]: run-netns-cni\x2d1efccac8\x2dbb73\x2d4954\x2d8019\x2d4c36c5465068.mount: Deactivated successfully. Nov 23 22:58:48.205229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803162301.mount: Deactivated successfully. 
Nov 23 22:58:48.260831 containerd[2005]: time="2025-11-23T22:58:48.260751627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:48.263355 containerd[2005]: time="2025-11-23T22:58:48.263227167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 22:58:48.265414 containerd[2005]: time="2025-11-23T22:58:48.265356459Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:48.270724 containerd[2005]: time="2025-11-23T22:58:48.270652971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:58:48.273006 containerd[2005]: time="2025-11-23T22:58:48.272940807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.183103518s" Nov 23 22:58:48.273006 containerd[2005]: time="2025-11-23T22:58:48.272995803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 22:58:48.301801 containerd[2005]: time="2025-11-23T22:58:48.301744864Z" level=info msg="CreateContainer within sandbox \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 22:58:48.328285 containerd[2005]: time="2025-11-23T22:58:48.327947104Z" level=info msg="Container 08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:48.361287 containerd[2005]: time="2025-11-23T22:58:48.360937384Z" level=info msg="CreateContainer within sandbox \"9b65c50962858df8c78388f4d2a963c9440da15aa32e2fc969899c01aeb8e75f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643\"" Nov 23 22:58:48.363988 containerd[2005]: time="2025-11-23T22:58:48.363753916Z" level=info msg="StartContainer for \"08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643\"" Nov 23 22:58:48.368356 containerd[2005]: time="2025-11-23T22:58:48.368281096Z" level=info msg="connecting to shim 08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643" address="unix:///run/containerd/s/dab2e52cbb8cbc34e7823325313afb9b21d4eaecf6fe00318d45f3d64da699d8" protocol=ttrpc version=3 Nov 23 22:58:48.407587 systemd[1]: Started cri-containerd-08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643.scope - libcontainer container 08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643. Nov 23 22:58:48.529357 containerd[2005]: time="2025-11-23T22:58:48.529047953Z" level=info msg="StartContainer for \"08f207e04b57b18e4b0deedf646195b74393407b7cd9d9a268cfec712c886643\" returns successfully" Nov 23 22:58:48.791765 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 22:58:48.791992 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 23 22:58:49.156667 kubelet[3322]: I1123 22:58:49.156107 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zmgh5" podStartSLOduration=2.373498574 podStartE2EDuration="18.15607366s" podCreationTimestamp="2025-11-23 22:58:31 +0000 UTC" firstStartedPulling="2025-11-23 22:58:32.491573209 +0000 UTC m=+33.010649877" lastFinishedPulling="2025-11-23 22:58:48.274148295 +0000 UTC m=+48.793224963" observedRunningTime="2025-11-23 22:58:49.153200368 +0000 UTC m=+49.672277072" watchObservedRunningTime="2025-11-23 22:58:49.15607366 +0000 UTC m=+49.675150316" Nov 23 22:58:49.160555 kubelet[3322]: I1123 22:58:49.159794 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmrzt\" (UniqueName: \"kubernetes.io/projected/5a862233-e232-405f-aea9-b959cf926288-kube-api-access-rmrzt\") pod \"5a862233-e232-405f-aea9-b959cf926288\" (UID: \"5a862233-e232-405f-aea9-b959cf926288\") " Nov 23 22:58:49.160555 kubelet[3322]: I1123 22:58:49.159874 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a862233-e232-405f-aea9-b959cf926288-whisker-ca-bundle\") pod \"5a862233-e232-405f-aea9-b959cf926288\" (UID: \"5a862233-e232-405f-aea9-b959cf926288\") " Nov 23 22:58:49.160555 kubelet[3322]: I1123 22:58:49.159926 3322 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5a862233-e232-405f-aea9-b959cf926288-whisker-backend-key-pair\") pod \"5a862233-e232-405f-aea9-b959cf926288\" (UID: \"5a862233-e232-405f-aea9-b959cf926288\") " Nov 23 22:58:49.170655 kubelet[3322]: I1123 22:58:49.170193 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a862233-e232-405f-aea9-b959cf926288-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5a862233-e232-405f-aea9-b959cf926288" (UID: "5a862233-e232-405f-aea9-b959cf926288"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 22:58:49.176190 kubelet[3322]: I1123 22:58:49.176131 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a862233-e232-405f-aea9-b959cf926288-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5a862233-e232-405f-aea9-b959cf926288" (UID: "5a862233-e232-405f-aea9-b959cf926288"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 22:58:49.179929 kubelet[3322]: I1123 22:58:49.179437 3322 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a862233-e232-405f-aea9-b959cf926288-kube-api-access-rmrzt" (OuterVolumeSpecName: "kube-api-access-rmrzt") pod "5a862233-e232-405f-aea9-b959cf926288" (UID: "5a862233-e232-405f-aea9-b959cf926288"). InnerVolumeSpecName "kube-api-access-rmrzt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 22:58:49.209083 systemd[1]: var-lib-kubelet-pods-5a862233\x2de232\x2d405f\x2daea9\x2db959cf926288-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drmrzt.mount: Deactivated successfully. Nov 23 22:58:49.212406 systemd[1]: var-lib-kubelet-pods-5a862233\x2de232\x2d405f\x2daea9\x2db959cf926288-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
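The pod_startup_latency_tracker entry above prints its timestamps in Go's default time.Time format, so the reported durations can be re-derived directly. A small sketch using values copied from that line; the layout string is the only addition:

```go
// Sketch: re-derive the pull window and startup duration from the timestamps
// in the pod_startup_latency_tracker line above. The values are copied from
// the log; the layout string is Go's default time.Time format.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstPull := mustParse("2025-11-23 22:58:32.491573209 +0000 UTC")
	lastPull := mustParse("2025-11-23 22:58:48.274148295 +0000 UTC")
	created := mustParse("2025-11-23 22:58:31 +0000 UTC")
	running := mustParse("2025-11-23 22:58:49.153200368 +0000 UTC")

	fmt.Println("image pulling window:", lastPull.Sub(firstPull)) // roughly 15.78s
	fmt.Println("creation to running: ", running.Sub(created))    // close to podStartE2EDuration (~18.15s)
}
```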
Nov 23 22:58:49.270176 kubelet[3322]: I1123 22:58:49.270107 3322 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmrzt\" (UniqueName: \"kubernetes.io/projected/5a862233-e232-405f-aea9-b959cf926288-kube-api-access-rmrzt\") on node \"ip-172-31-17-147\" DevicePath \"\"" Nov 23 22:58:49.270346 kubelet[3322]: I1123 22:58:49.270219 3322 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a862233-e232-405f-aea9-b959cf926288-whisker-ca-bundle\") on node \"ip-172-31-17-147\" DevicePath \"\"" Nov 23 22:58:49.270346 kubelet[3322]: I1123 22:58:49.270287 3322 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5a862233-e232-405f-aea9-b959cf926288-whisker-backend-key-pair\") on node \"ip-172-31-17-147\" DevicePath \"\"" Nov 23 22:58:49.444666 systemd[1]: Removed slice kubepods-besteffort-pod5a862233_e232_405f_aea9_b959cf926288.slice - libcontainer container kubepods-besteffort-pod5a862233_e232_405f_aea9_b959cf926288.slice. Nov 23 22:58:49.580545 systemd[1]: Created slice kubepods-besteffort-pod30c50e65_a97a_4ae6_b165_6f81318bd6a7.slice - libcontainer container kubepods-besteffort-pod30c50e65_a97a_4ae6_b165_6f81318bd6a7.slice. Nov 23 22:58:49.673681 kubelet[3322]: I1123 22:58:49.673604 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30c50e65-a97a-4ae6-b165-6f81318bd6a7-whisker-ca-bundle\") pod \"whisker-69f9f4876b-55rzk\" (UID: \"30c50e65-a97a-4ae6-b165-6f81318bd6a7\") " pod="calico-system/whisker-69f9f4876b-55rzk" Nov 23 22:58:49.673985 kubelet[3322]: I1123 22:58:49.673816 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/30c50e65-a97a-4ae6-b165-6f81318bd6a7-whisker-backend-key-pair\") pod \"whisker-69f9f4876b-55rzk\" (UID: \"30c50e65-a97a-4ae6-b165-6f81318bd6a7\") " pod="calico-system/whisker-69f9f4876b-55rzk" Nov 23 22:58:49.675349 kubelet[3322]: I1123 22:58:49.674201 3322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tm79\" (UniqueName: \"kubernetes.io/projected/30c50e65-a97a-4ae6-b165-6f81318bd6a7-kube-api-access-8tm79\") pod \"whisker-69f9f4876b-55rzk\" (UID: \"30c50e65-a97a-4ae6-b165-6f81318bd6a7\") " pod="calico-system/whisker-69f9f4876b-55rzk" Nov 23 22:58:49.753307 kubelet[3322]: I1123 22:58:49.752889 3322 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a862233-e232-405f-aea9-b959cf926288" path="/var/lib/kubelet/pods/5a862233-e232-405f-aea9-b959cf926288/volumes" Nov 23 22:58:49.892328 containerd[2005]: time="2025-11-23T22:58:49.892019456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f9f4876b-55rzk,Uid:30c50e65-a97a-4ae6-b165-6f81318bd6a7,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:50.234725 (udev-worker)[4637]: Network interface NamePolicy= disabled on kernel command line. 
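The "Cleaned up orphaned pod volumes dir" entry refers to /var/lib/kubelet/pods/&lt;pod-uid&gt;/volumes. A read-only inspection sketch for that path, using the UID from the log; anything below the volumes/ directory is simply listed, and no layout beyond the quoted path is assumed:

```go
// Illustrative sketch: inspect the volumes directory kubelet reports cleaning
// up above (/var/lib/kubelet/pods/<pod-uid>/volumes). Read-only; the UID is
// taken from the log line.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	podUID := "5a862233-e232-405f-aea9-b959cf926288" // UID from the log above
	volDir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")

	entries, err := os.ReadDir(volDir)
	if os.IsNotExist(err) {
		fmt.Println(volDir, "is gone; kubelet has already cleaned it up")
		return
	}
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, e := range entries {
		fmt.Println("still present:", filepath.Join(volDir, e.Name()))
	}
}
```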
Nov 23 22:58:50.241155 systemd-networkd[1888]: cali26690c1cf35: Link UP Nov 23 22:58:50.241847 systemd-networkd[1888]: cali26690c1cf35: Gained carrier Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:49.939 [INFO][4692] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.023 [INFO][4692] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0 whisker-69f9f4876b- calico-system 30c50e65-a97a-4ae6-b165-6f81318bd6a7 923 0 2025-11-23 22:58:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69f9f4876b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-17-147 whisker-69f9f4876b-55rzk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali26690c1cf35 [] [] }} ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.024 [INFO][4692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.106 [INFO][4703] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" HandleID="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Workload="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.107 [INFO][4703] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" HandleID="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Workload="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-147", "pod":"whisker-69f9f4876b-55rzk", "timestamp":"2025-11-23 22:58:50.106830005 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.107 [INFO][4703] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.107 [INFO][4703] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.107 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.130 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.143 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.152 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.157 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.164 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.164 [INFO][4703] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.167 [INFO][4703] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.175 [INFO][4703] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.194 [INFO][4703] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.129/26] block=192.168.89.128/26 handle="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.194 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.129/26] handle="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" host="ip-172-31-17-147" Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.194 [INFO][4703] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:50.296716 containerd[2005]: 2025-11-23 22:58:50.194 [INFO][4703] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.129/26] IPv6=[] ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" HandleID="k8s-pod-network.5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Workload="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.301562 containerd[2005]: 2025-11-23 22:58:50.213 [INFO][4692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0", GenerateName:"whisker-69f9f4876b-", Namespace:"calico-system", SelfLink:"", UID:"30c50e65-a97a-4ae6-b165-6f81318bd6a7", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69f9f4876b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"whisker-69f9f4876b-55rzk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26690c1cf35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:50.301562 containerd[2005]: 2025-11-23 22:58:50.214 [INFO][4692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.129/32] ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.301562 containerd[2005]: 2025-11-23 22:58:50.214 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26690c1cf35 ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.301562 containerd[2005]: 2025-11-23 22:58:50.244 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.301562 containerd[2005]: 2025-11-23 22:58:50.246 [INFO][4692] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" 
WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0", GenerateName:"whisker-69f9f4876b-", Namespace:"calico-system", SelfLink:"", UID:"30c50e65-a97a-4ae6-b165-6f81318bd6a7", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69f9f4876b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d", Pod:"whisker-69f9f4876b-55rzk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali26690c1cf35", MAC:"16:25:ef:33:74:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:50.301562 containerd[2005]: 2025-11-23 22:58:50.291 [INFO][4692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" Namespace="calico-system" Pod="whisker-69f9f4876b-55rzk" WorkloadEndpoint="ip--172--31--17--147-k8s-whisker--69f9f4876b--55rzk-eth0" Nov 23 22:58:50.359348 containerd[2005]: time="2025-11-23T22:58:50.358473558Z" level=info msg="connecting to shim 5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d" address="unix:///run/containerd/s/da15ceff47ba3fb282ad0132e83141ff4c9177d7280e40c79b9a189c9b52a24f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:50.423865 systemd[1]: Started cri-containerd-5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d.scope - libcontainer container 5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d. 
Nov 23 22:58:50.507190 containerd[2005]: time="2025-11-23T22:58:50.506992651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f9f4876b-55rzk,Uid:30c50e65-a97a-4ae6-b165-6f81318bd6a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fb769c457c4c6e2608d8c7663c703a4d74c127ebb930908c2bf0c4c5fb3e33d\"" Nov 23 22:58:50.512584 containerd[2005]: time="2025-11-23T22:58:50.512535187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:58:50.793437 containerd[2005]: time="2025-11-23T22:58:50.793224632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:50.795753 containerd[2005]: time="2025-11-23T22:58:50.795608168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:58:50.796024 containerd[2005]: time="2025-11-23T22:58:50.795937268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:58:50.796357 kubelet[3322]: E1123 22:58:50.796294 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:58:50.796905 kubelet[3322]: E1123 22:58:50.796389 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:58:50.807453 kubelet[3322]: E1123 22:58:50.806221 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49a5b0f5daf440afb726a29c7c6e8f8b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:50.810516 containerd[2005]: time="2025-11-23T22:58:50.810454892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:58:51.102585 containerd[2005]: time="2025-11-23T22:58:51.102430650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:51.104920 containerd[2005]: time="2025-11-23T22:58:51.104821218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:58:51.105156 containerd[2005]: time="2025-11-23T22:58:51.104900982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:58:51.105478 kubelet[3322]: E1123 22:58:51.105419 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:58:51.105585 kubelet[3322]: E1123 22:58:51.105494 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:58:51.105750 kubelet[3322]: E1123 22:58:51.105661 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:51.107384 kubelet[3322]: E1123 22:58:51.107302 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:58:51.138701 kubelet[3322]: E1123 22:58:51.138615 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:58:51.414150 systemd-networkd[1888]: cali26690c1cf35: Gained IPv6LL Nov 23 22:58:52.013693 (udev-worker)[4638]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:58:52.019890 systemd-networkd[1888]: vxlan.calico: Link UP Nov 23 22:58:52.019908 systemd-networkd[1888]: vxlan.calico: Gained carrier Nov 23 22:58:52.138075 kubelet[3322]: E1123 22:58:52.137852 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:58:53.192732 systemd[1]: Started sshd@7-172.31.17.147:22-139.178.68.195:59264.service - OpenSSH per-connection server daemon (139.178.68.195:59264). Nov 23 22:58:53.445141 sshd[5016]: Accepted publickey for core from 139.178.68.195 port 59264 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:53.448154 sshd-session[5016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:53.457555 systemd-logind[1974]: New session 8 of user core. Nov 23 22:58:53.470503 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 22:58:53.748741 containerd[2005]: time="2025-11-23T22:58:53.748566947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-hc826,Uid:ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:53.788313 sshd[5019]: Connection closed by 139.178.68.195 port 59264 Nov 23 22:58:53.789734 sshd-session[5016]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:53.800363 systemd[1]: sshd@7-172.31.17.147:22-139.178.68.195:59264.service: Deactivated successfully. Nov 23 22:58:53.806695 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 22:58:53.811978 systemd-logind[1974]: Session 8 logged out. Waiting for processes to exit. Nov 23 22:58:53.815670 systemd-logind[1974]: Removed session 8. 
Nov 23 22:58:53.975474 systemd-networkd[1888]: vxlan.calico: Gained IPv6LL Nov 23 22:58:54.006751 systemd-networkd[1888]: calia0516560302: Link UP Nov 23 22:58:54.009502 systemd-networkd[1888]: calia0516560302: Gained carrier Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.859 [INFO][5036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0 calico-apiserver-855476946d- calico-apiserver ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef 854 0 2025-11-23 22:58:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855476946d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-147 calico-apiserver-855476946d-hc826 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia0516560302 [] [] }} ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.859 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.925 [INFO][5050] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" HandleID="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Workload="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.927 [INFO][5050] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" HandleID="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Workload="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-147", "pod":"calico-apiserver-855476946d-hc826", "timestamp":"2025-11-23 22:58:53.925242828 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.927 [INFO][5050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.927 [INFO][5050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.927 [INFO][5050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.943 [INFO][5050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.951 [INFO][5050] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.958 [INFO][5050] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.962 [INFO][5050] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.966 [INFO][5050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.966 [INFO][5050] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.969 [INFO][5050] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.984 [INFO][5050] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.993 [INFO][5050] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.130/26] block=192.168.89.128/26 handle="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.994 [INFO][5050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.130/26] handle="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" host="ip-172-31-17-147" Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.994 [INFO][5050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:54.041675 containerd[2005]: 2025-11-23 22:58:53.994 [INFO][5050] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.130/26] IPv6=[] ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" HandleID="k8s-pod-network.4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Workload="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.043968 containerd[2005]: 2025-11-23 22:58:53.999 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0", GenerateName:"calico-apiserver-855476946d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855476946d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"calico-apiserver-855476946d-hc826", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0516560302", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:54.043968 containerd[2005]: 2025-11-23 22:58:53.999 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.130/32] ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.043968 containerd[2005]: 2025-11-23 22:58:54.000 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0516560302 ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.043968 containerd[2005]: 2025-11-23 22:58:54.010 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.043968 containerd[2005]: 2025-11-23 22:58:54.011 [INFO][5036] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0", GenerateName:"calico-apiserver-855476946d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855476946d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae", Pod:"calico-apiserver-855476946d-hc826", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0516560302", MAC:"66:37:7e:c3:69:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:54.043968 containerd[2005]: 2025-11-23 22:58:54.032 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-hc826" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--hc826-eth0" Nov 23 22:58:54.094492 containerd[2005]: time="2025-11-23T22:58:54.094418156Z" level=info msg="connecting to shim 4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae" address="unix:///run/containerd/s/c47f1c3a9a7af7d7dab8f7557770b8a40e62e710d3ac25930f84de4652864969" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:54.147581 systemd[1]: Started cri-containerd-4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae.scope - libcontainer container 4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae. 
Nov 23 22:58:54.229348 containerd[2005]: time="2025-11-23T22:58:54.229292277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-hc826,Uid:ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4f4436c0f258a83f16a34b37d8af2961fd8c49ab1c034ca73cfbdf06d463e3ae\"" Nov 23 22:58:54.234693 containerd[2005]: time="2025-11-23T22:58:54.233613525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:58:54.494907 containerd[2005]: time="2025-11-23T22:58:54.494827114Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:54.497495 containerd[2005]: time="2025-11-23T22:58:54.497405626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:58:54.497898 containerd[2005]: time="2025-11-23T22:58:54.497574298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:58:54.498577 kubelet[3322]: E1123 22:58:54.497750 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:58:54.498577 kubelet[3322]: E1123 22:58:54.497812 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:58:54.498577 kubelet[3322]: E1123 22:58:54.498001 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-hc826_calico-apiserver(ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:54.500339 kubelet[3322]: E1123 22:58:54.499320 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:58:54.747105 containerd[2005]: time="2025-11-23T22:58:54.746596512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f2mxq,Uid:dbdf1f8a-97eb-41f6-84e0-31293d800724,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:54.748855 containerd[2005]: time="2025-11-23T22:58:54.748761444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d46955649-8px8j,Uid:efcb5707-de3f-40a1-84e7-2d29faf16856,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:55.056882 systemd-networkd[1888]: cali0b15b959a9d: Link UP Nov 23 22:58:55.058936 systemd-networkd[1888]: cali0b15b959a9d: Gained carrier Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.857 [INFO][5114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0 calico-kube-controllers-5d46955649- calico-system efcb5707-de3f-40a1-84e7-2d29faf16856 851 0 2025-11-23 22:58:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d46955649 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-17-147 calico-kube-controllers-5d46955649-8px8j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b15b959a9d [] [] }} ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-" Nov 23 22:58:55.099445 containerd[2005]: 
2025-11-23 22:58:54.858 [INFO][5114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.942 [INFO][5135] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" HandleID="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Workload="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.943 [INFO][5135] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" HandleID="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Workload="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-147", "pod":"calico-kube-controllers-5d46955649-8px8j", "timestamp":"2025-11-23 22:58:54.942820837 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.943 [INFO][5135] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.943 [INFO][5135] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.943 [INFO][5135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.967 [INFO][5135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:54.992 [INFO][5135] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.001 [INFO][5135] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.005 [INFO][5135] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.010 [INFO][5135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.010 [INFO][5135] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.013 [INFO][5135] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39 Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.020 [INFO][5135] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.034 [INFO][5135] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.131/26] block=192.168.89.128/26 handle="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.034 [INFO][5135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.131/26] handle="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" host="ip-172-31-17-147" Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.034 [INFO][5135] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:55.099445 containerd[2005]: 2025-11-23 22:58:55.035 [INFO][5135] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.131/26] IPv6=[] ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" HandleID="k8s-pod-network.6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Workload="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.100495 containerd[2005]: 2025-11-23 22:58:55.041 [INFO][5114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0", GenerateName:"calico-kube-controllers-5d46955649-", Namespace:"calico-system", SelfLink:"", UID:"efcb5707-de3f-40a1-84e7-2d29faf16856", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d46955649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"calico-kube-controllers-5d46955649-8px8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b15b959a9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:55.100495 containerd[2005]: 2025-11-23 22:58:55.041 [INFO][5114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.131/32] ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.100495 containerd[2005]: 2025-11-23 22:58:55.042 [INFO][5114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b15b959a9d ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.100495 containerd[2005]: 2025-11-23 22:58:55.061 [INFO][5114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.100495 
containerd[2005]: 2025-11-23 22:58:55.062 [INFO][5114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0", GenerateName:"calico-kube-controllers-5d46955649-", Namespace:"calico-system", SelfLink:"", UID:"efcb5707-de3f-40a1-84e7-2d29faf16856", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d46955649", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39", Pod:"calico-kube-controllers-5d46955649-8px8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b15b959a9d", MAC:"9e:b6:42:ae:3e:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:55.100495 containerd[2005]: 2025-11-23 22:58:55.087 [INFO][5114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" Namespace="calico-system" Pod="calico-kube-controllers-5d46955649-8px8j" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--kube--controllers--5d46955649--8px8j-eth0" Nov 23 22:58:55.166403 kubelet[3322]: E1123 22:58:55.165645 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:58:55.203835 containerd[2005]: time="2025-11-23T22:58:55.203326450Z" level=info msg="connecting to shim 6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39" address="unix:///run/containerd/s/22e264da0890cf1c9d3b4231e0d5899f00b849bbe357216f2961df35a3adabb8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:55.284667 systemd[1]: Started cri-containerd-6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39.scope - libcontainer container 
6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39. Nov 23 22:58:55.286278 systemd-networkd[1888]: cali79749b4169b: Link UP Nov 23 22:58:55.295959 systemd-networkd[1888]: cali79749b4169b: Gained carrier Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:54.864 [INFO][5112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0 coredns-668d6bf9bc- kube-system dbdf1f8a-97eb-41f6-84e0-31293d800724 848 0 2025-11-23 22:58:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-147 coredns-668d6bf9bc-f2mxq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79749b4169b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:54.865 [INFO][5112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:54.951 [INFO][5140] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" HandleID="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Workload="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:54.951 [INFO][5140] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" HandleID="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Workload="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000291890), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-147", "pod":"coredns-668d6bf9bc-f2mxq", "timestamp":"2025-11-23 22:58:54.951374389 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:54.952 [INFO][5140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.035 [INFO][5140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.035 [INFO][5140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.073 [INFO][5140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.096 [INFO][5140] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.122 [INFO][5140] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.129 [INFO][5140] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.136 [INFO][5140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.136 [INFO][5140] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.155 [INFO][5140] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.189 [INFO][5140] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.233 [INFO][5140] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.132/26] block=192.168.89.128/26 handle="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.233 [INFO][5140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.132/26] handle="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" host="ip-172-31-17-147" Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.238 [INFO][5140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:55.337563 containerd[2005]: 2025-11-23 22:58:55.238 [INFO][5140] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.132/26] IPv6=[] ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" HandleID="k8s-pod-network.4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Workload="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.341066 containerd[2005]: 2025-11-23 22:58:55.250 [INFO][5112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dbdf1f8a-97eb-41f6-84e0-31293d800724", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"coredns-668d6bf9bc-f2mxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79749b4169b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:55.341066 containerd[2005]: 2025-11-23 22:58:55.251 [INFO][5112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.132/32] ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.341066 containerd[2005]: 2025-11-23 22:58:55.252 [INFO][5112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79749b4169b ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.341066 containerd[2005]: 2025-11-23 22:58:55.297 [INFO][5112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" 
WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.341066 containerd[2005]: 2025-11-23 22:58:55.299 [INFO][5112] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dbdf1f8a-97eb-41f6-84e0-31293d800724", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa", Pod:"coredns-668d6bf9bc-f2mxq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79749b4169b", MAC:"da:72:b8:04:4a:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:55.344004 containerd[2005]: 2025-11-23 22:58:55.330 [INFO][5112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-f2mxq" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--f2mxq-eth0" Nov 23 22:58:55.408012 containerd[2005]: time="2025-11-23T22:58:55.407853167Z" level=info msg="connecting to shim 4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa" address="unix:///run/containerd/s/c6832fc3525d8cfa4933fb795e3acd4b80fa36cc208209124d6a533b7a0506ce" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:55.491520 systemd[1]: Started cri-containerd-4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa.scope - libcontainer container 4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa. 
Nov 23 22:58:55.559930 containerd[2005]: time="2025-11-23T22:58:55.559796376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d46955649-8px8j,Uid:efcb5707-de3f-40a1-84e7-2d29faf16856,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a0bba1f9aa70f764e32e46e6b1665f8bc317f849b4c32e360dd570eea08eb39\"" Nov 23 22:58:55.567790 containerd[2005]: time="2025-11-23T22:58:55.567701952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:58:55.637566 systemd-networkd[1888]: calia0516560302: Gained IPv6LL Nov 23 22:58:55.749435 containerd[2005]: time="2025-11-23T22:58:55.749386729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-znnxr,Uid:d24d7369-6494-4a66-8309-347720b5fc56,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:58:55.800691 containerd[2005]: time="2025-11-23T22:58:55.800608921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f2mxq,Uid:dbdf1f8a-97eb-41f6-84e0-31293d800724,Namespace:kube-system,Attempt:0,} returns sandbox id \"4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa\"" Nov 23 22:58:55.810954 containerd[2005]: time="2025-11-23T22:58:55.810885325Z" level=info msg="CreateContainer within sandbox \"4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:58:55.828631 containerd[2005]: time="2025-11-23T22:58:55.827448661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:55.829664 containerd[2005]: time="2025-11-23T22:58:55.829589533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:58:55.829793 containerd[2005]: time="2025-11-23T22:58:55.829724497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:58:55.830153 kubelet[3322]: E1123 22:58:55.830040 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:58:55.831848 kubelet[3322]: E1123 22:58:55.830194 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:58:55.831848 kubelet[3322]: E1123 22:58:55.830515 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d46955649-8px8j_calico-system(efcb5707-de3f-40a1-84e7-2d29faf16856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:55.832779 kubelet[3322]: E1123 22:58:55.832357 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:58:55.866113 containerd[2005]: time="2025-11-23T22:58:55.866008861Z" level=info msg="Container 
2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:55.879073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2511787555.mount: Deactivated successfully. Nov 23 22:58:55.916793 containerd[2005]: time="2025-11-23T22:58:55.915882373Z" level=info msg="CreateContainer within sandbox \"4eac335ff57f6c2632901d5b74f8d5aa6223c0b96d0a6fdfcd904129ac39c9fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde\"" Nov 23 22:58:55.917479 containerd[2005]: time="2025-11-23T22:58:55.917407201Z" level=info msg="StartContainer for \"2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde\"" Nov 23 22:58:55.920658 containerd[2005]: time="2025-11-23T22:58:55.920564425Z" level=info msg="connecting to shim 2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde" address="unix:///run/containerd/s/c6832fc3525d8cfa4933fb795e3acd4b80fa36cc208209124d6a533b7a0506ce" protocol=ttrpc version=3 Nov 23 22:58:56.010014 systemd[1]: Started cri-containerd-2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde.scope - libcontainer container 2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde. Nov 23 22:58:56.138815 containerd[2005]: time="2025-11-23T22:58:56.137680991Z" level=info msg="StartContainer for \"2133b77cf7c34e5088e834eaf1caf5e5109d1f3d7ef480b92fcb8066163bdbde\" returns successfully" Nov 23 22:58:56.173461 kubelet[3322]: E1123 22:58:56.173019 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:58:56.178668 kubelet[3322]: E1123 22:58:56.178592 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:58:56.313634 systemd-networkd[1888]: cali07a6c98d6af: Link UP Nov 23 22:58:56.315425 systemd-networkd[1888]: cali07a6c98d6af: Gained carrier Nov 23 22:58:56.362881 kubelet[3322]: I1123 22:58:56.360811 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f2mxq" podStartSLOduration=51.360785112 podStartE2EDuration="51.360785112s" podCreationTimestamp="2025-11-23 22:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:56.285226223 +0000 UTC m=+56.804302903" watchObservedRunningTime="2025-11-23 22:58:56.360785112 +0000 UTC m=+56.879861780" Nov 23 22:58:56.367097 
containerd[2005]: 2025-11-23 22:58:55.995 [INFO][5267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0 calico-apiserver-855476946d- calico-apiserver d24d7369-6494-4a66-8309-347720b5fc56 855 0 2025-11-23 22:58:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:855476946d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-17-147 calico-apiserver-855476946d-znnxr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07a6c98d6af [] [] }} ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:55.996 [INFO][5267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.119 [INFO][5294] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" HandleID="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Workload="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.120 [INFO][5294] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" HandleID="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Workload="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001036e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-17-147", "pod":"calico-apiserver-855476946d-znnxr", "timestamp":"2025-11-23 22:58:56.119951362 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.122 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.122 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.123 [INFO][5294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.153 [INFO][5294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.166 [INFO][5294] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.190 [INFO][5294] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.202 [INFO][5294] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.212 [INFO][5294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.213 [INFO][5294] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.228 [INFO][5294] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930 Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.253 [INFO][5294] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.297 [INFO][5294] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.133/26] block=192.168.89.128/26 handle="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.297 [INFO][5294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.133/26] handle="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" host="ip-172-31-17-147" Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.297 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
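The [INFO][5294] run above is Calico's standard IPAM claim: take the host-wide lock, confirm this node's affinity for the 192.168.89.128/26 block, write the block back to claim an address under a handle derived from the container ID, then release the lock. As a rough model only (this is not Calico's code; the block CIDR comes from the log, and treating .128-.132 as already taken is an assumption implied by the claim of .133), the selection step is essentially "first free address in the affine block":

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block that is not already claimed.
// Illustrative only: Calico persists allocations in its datastore and
// serialises them with the host-wide IPAM lock seen in the log entries.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.89.128/26")
	used := map[netip.Addr]bool{}
	// Assumed already allocated on ip-172-31-17-147, which is what the
	// claim of .133 for calico-apiserver-855476946d-znnxr implies.
	for _, s := range []string{"192.168.89.128", "192.168.89.129", "192.168.89.130", "192.168.89.131", "192.168.89.132"} {
		used[netip.MustParseAddr(s)] = true
	}
	if a, ok := nextFree(block, used); ok {
		fmt.Println("would claim", a) // prints 192.168.89.133, matching the log
	}
}

The later claims of .134 (csi-node-driver-rz2c9), .135 (coredns-668d6bf9bc-hsvdw) and .136 (goldmane-666569f655-zrbmg) further down follow the same pattern inside the same block.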
Nov 23 22:58:56.367097 containerd[2005]: 2025-11-23 22:58:56.297 [INFO][5294] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.133/26] IPv6=[] ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" HandleID="k8s-pod-network.69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Workload="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.369055 containerd[2005]: 2025-11-23 22:58:56.303 [INFO][5267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0", GenerateName:"calico-apiserver-855476946d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d24d7369-6494-4a66-8309-347720b5fc56", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855476946d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"calico-apiserver-855476946d-znnxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07a6c98d6af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:56.369055 containerd[2005]: 2025-11-23 22:58:56.303 [INFO][5267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.133/32] ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.369055 containerd[2005]: 2025-11-23 22:58:56.304 [INFO][5267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07a6c98d6af ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.369055 containerd[2005]: 2025-11-23 22:58:56.318 [INFO][5267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.369055 containerd[2005]: 2025-11-23 22:58:56.326 [INFO][5267] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0", GenerateName:"calico-apiserver-855476946d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d24d7369-6494-4a66-8309-347720b5fc56", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"855476946d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930", Pod:"calico-apiserver-855476946d-znnxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07a6c98d6af", MAC:"ee:f4:3e:dd:61:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:56.369055 containerd[2005]: 2025-11-23 22:58:56.355 [INFO][5267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" Namespace="calico-apiserver" Pod="calico-apiserver-855476946d-znnxr" WorkloadEndpoint="ip--172--31--17--147-k8s-calico--apiserver--855476946d--znnxr-eth0" Nov 23 22:58:56.421557 containerd[2005]: time="2025-11-23T22:58:56.421486092Z" level=info msg="connecting to shim 69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930" address="unix:///run/containerd/s/8614e2daea200582a29304c2e3616652b65cf233c79d6e5e5c45ad112849cfa8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:56.483723 systemd[1]: Started cri-containerd-69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930.scope - libcontainer container 69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930. 
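At this point containerd has handed the new sandbox to a shim reachable over ttrpc at the unix socket above, and systemd tracks it as a transient cri-containerd-<id>.scope unit. The same state can be inspected from the node with a small program against the containerd socket; a sketch, assuming the default socket path, the k8s.io namespace used by the CRI plugin, and the classic Go client import path (containerd 2.x ships the client under a different module path):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// CRI-managed containers live in the "k8s.io" namespace, matching the
	// "connecting to shim ... namespace=k8s.io" entries in this log.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. 2133b77c... and 69d98433... from this log
	}
}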
Nov 23 22:58:56.600444 containerd[2005]: time="2025-11-23T22:58:56.600348637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-855476946d-znnxr,Uid:d24d7369-6494-4a66-8309-347720b5fc56,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"69d984339a8cf7cb5d13c28f7400e2b460089a4fa77db4f3254109c9601b4930\"" Nov 23 22:58:56.605344 containerd[2005]: time="2025-11-23T22:58:56.605215525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:58:56.749527 containerd[2005]: time="2025-11-23T22:58:56.747193718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hsvdw,Uid:56da4e3d-05e9-4599-8060-52650f1b8e04,Namespace:kube-system,Attempt:0,}" Nov 23 22:58:56.754688 containerd[2005]: time="2025-11-23T22:58:56.754616090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rz2c9,Uid:b6239d0a-f247-4ff7-8f39-2d2983756ead,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:56.794542 systemd-networkd[1888]: cali0b15b959a9d: Gained IPv6LL Nov 23 22:58:56.913489 containerd[2005]: time="2025-11-23T22:58:56.913418366Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:56.916500 containerd[2005]: time="2025-11-23T22:58:56.916438634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:58:56.917080 containerd[2005]: time="2025-11-23T22:58:56.916955474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:58:56.918413 kubelet[3322]: E1123 22:58:56.917349 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:58:56.918413 kubelet[3322]: E1123 22:58:56.917414 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:58:56.919440 kubelet[3322]: E1123 22:58:56.918499 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8ffj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-znnxr_calico-apiserver(d24d7369-6494-4a66-8309-347720b5fc56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:56.920823 kubelet[3322]: E1123 22:58:56.920748 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:58:56.983184 systemd-networkd[1888]: cali79749b4169b: Gained IPv6LL Nov 23 22:58:57.131354 systemd-networkd[1888]: cali3222e280c00: Link UP Nov 23 22:58:57.135168 systemd-networkd[1888]: cali3222e280c00: Gained carrier Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:56.954 [INFO][5386] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0 csi-node-driver- calico-system b6239d0a-f247-4ff7-8f39-2d2983756ead 750 0 2025-11-23 22:58:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-17-147 csi-node-driver-rz2c9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3222e280c00 [] [] }} ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:56.954 [INFO][5386] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.039 [INFO][5401] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" HandleID="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Workload="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.039 [INFO][5401] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" HandleID="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Workload="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031b700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-147", "pod":"csi-node-driver-rz2c9", "timestamp":"2025-11-23 22:58:57.039370295 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.039 [INFO][5401] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.039 [INFO][5401] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.039 [INFO][5401] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.059 [INFO][5401] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.069 [INFO][5401] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.076 [INFO][5401] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.080 [INFO][5401] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.085 [INFO][5401] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.085 [INFO][5401] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.091 [INFO][5401] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310 Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.100 [INFO][5401] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.114 [INFO][5401] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.134/26] block=192.168.89.128/26 handle="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.114 [INFO][5401] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.134/26] handle="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" host="ip-172-31-17-147" Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.114 [INFO][5401] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:57.191933 containerd[2005]: 2025-11-23 22:58:57.114 [INFO][5401] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.134/26] IPv6=[] ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" HandleID="k8s-pod-network.1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Workload="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.195536 containerd[2005]: 2025-11-23 22:58:57.120 [INFO][5386] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6239d0a-f247-4ff7-8f39-2d2983756ead", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"csi-node-driver-rz2c9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3222e280c00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:57.195536 containerd[2005]: 2025-11-23 22:58:57.120 [INFO][5386] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.134/32] ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.195536 containerd[2005]: 2025-11-23 22:58:57.121 [INFO][5386] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3222e280c00 ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.195536 containerd[2005]: 2025-11-23 22:58:57.142 [INFO][5386] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.195536 containerd[2005]: 2025-11-23 22:58:57.146 [INFO][5386] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" 
Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b6239d0a-f247-4ff7-8f39-2d2983756ead", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310", Pod:"csi-node-driver-rz2c9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3222e280c00", MAC:"22:37:3a:99:41:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:57.195536 containerd[2005]: 2025-11-23 22:58:57.179 [INFO][5386] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" Namespace="calico-system" Pod="csi-node-driver-rz2c9" WorkloadEndpoint="ip--172--31--17--147-k8s-csi--node--driver--rz2c9-eth0" Nov 23 22:58:57.200111 kubelet[3322]: E1123 22:58:57.199781 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:58:57.201319 kubelet[3322]: E1123 22:58:57.200780 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:58:57.287613 containerd[2005]: time="2025-11-23T22:58:57.287532660Z" level=info msg="connecting to shim 1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310" 
address="unix:///run/containerd/s/e5a0b2ef574f923c20a42e470be69426dfee31dd11c21d36f07b2fdcab93b676" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:57.330730 systemd-networkd[1888]: cali2a3400e38ec: Link UP Nov 23 22:58:57.335624 systemd-networkd[1888]: cali2a3400e38ec: Gained carrier Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:56.953 [INFO][5376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0 coredns-668d6bf9bc- kube-system 56da4e3d-05e9-4599-8060-52650f1b8e04 858 0 2025-11-23 22:58:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-17-147 coredns-668d6bf9bc-hsvdw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2a3400e38ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:56.954 [INFO][5376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.049 [INFO][5399] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" HandleID="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Workload="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.050 [INFO][5399] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" HandleID="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Workload="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103d20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-17-147", "pod":"coredns-668d6bf9bc-hsvdw", "timestamp":"2025-11-23 22:58:57.049492835 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.050 [INFO][5399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.114 [INFO][5399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.115 [INFO][5399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.163 [INFO][5399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.182 [INFO][5399] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.205 [INFO][5399] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.213 [INFO][5399] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.228 [INFO][5399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.228 [INFO][5399] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.236 [INFO][5399] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468 Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.257 [INFO][5399] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.292 [INFO][5399] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.135/26] block=192.168.89.128/26 handle="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.294 [INFO][5399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.135/26] handle="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" host="ip-172-31-17-147" Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.294 [INFO][5399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:57.388005 containerd[2005]: 2025-11-23 22:58:57.294 [INFO][5399] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.135/26] IPv6=[] ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" HandleID="k8s-pod-network.d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Workload="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.391088 containerd[2005]: 2025-11-23 22:58:57.308 [INFO][5376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56da4e3d-05e9-4599-8060-52650f1b8e04", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"coredns-668d6bf9bc-hsvdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a3400e38ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:57.391088 containerd[2005]: 2025-11-23 22:58:57.309 [INFO][5376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.135/32] ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.391088 containerd[2005]: 2025-11-23 22:58:57.309 [INFO][5376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a3400e38ec ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.391088 containerd[2005]: 2025-11-23 22:58:57.331 [INFO][5376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" 
WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.391088 containerd[2005]: 2025-11-23 22:58:57.333 [INFO][5376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"56da4e3d-05e9-4599-8060-52650f1b8e04", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468", Pod:"coredns-668d6bf9bc-hsvdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2a3400e38ec", MAC:"02:19:33:58:c4:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:57.393703 containerd[2005]: 2025-11-23 22:58:57.374 [INFO][5376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" Namespace="kube-system" Pod="coredns-668d6bf9bc-hsvdw" WorkloadEndpoint="ip--172--31--17--147-k8s-coredns--668d6bf9bc--hsvdw-eth0" Nov 23 22:58:57.421510 systemd[1]: Started cri-containerd-1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310.scope - libcontainer container 1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310. Nov 23 22:58:57.479184 containerd[2005]: time="2025-11-23T22:58:57.479088745Z" level=info msg="connecting to shim d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468" address="unix:///run/containerd/s/d8255e3beb6d812c69c33ff3a0078e18175469bd4d6762abd062128eb41f7265" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:57.569476 systemd[1]: Started cri-containerd-d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468.scope - libcontainer container d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468. 
Nov 23 22:58:57.576289 containerd[2005]: time="2025-11-23T22:58:57.576216962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rz2c9,Uid:b6239d0a-f247-4ff7-8f39-2d2983756ead,Namespace:calico-system,Attempt:0,} returns sandbox id \"1fd49f94751b31d7c94e49e509f41d5743998316237454a13e975e87b10d2310\"" Nov 23 22:58:57.579772 containerd[2005]: time="2025-11-23T22:58:57.579697586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:58:57.684330 containerd[2005]: time="2025-11-23T22:58:57.684105122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hsvdw,Uid:56da4e3d-05e9-4599-8060-52650f1b8e04,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468\"" Nov 23 22:58:57.693078 containerd[2005]: time="2025-11-23T22:58:57.693025802Z" level=info msg="CreateContainer within sandbox \"d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:58:57.714377 containerd[2005]: time="2025-11-23T22:58:57.714307142Z" level=info msg="Container 6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:58:57.735969 containerd[2005]: time="2025-11-23T22:58:57.735888434Z" level=info msg="CreateContainer within sandbox \"d9b6e526a4d798f607d85803fb37f3cd92f7c694f41d4831634dba7209983468\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9\"" Nov 23 22:58:57.738303 containerd[2005]: time="2025-11-23T22:58:57.737800623Z" level=info msg="StartContainer for \"6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9\"" Nov 23 22:58:57.740028 containerd[2005]: time="2025-11-23T22:58:57.739977039Z" level=info msg="connecting to shim 6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9" address="unix:///run/containerd/s/d8255e3beb6d812c69c33ff3a0078e18175469bd4d6762abd062128eb41f7265" protocol=ttrpc version=3 Nov 23 22:58:57.750853 containerd[2005]: time="2025-11-23T22:58:57.750079959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zrbmg,Uid:328f5f71-5736-4873-add1-f3d5d3b7eef2,Namespace:calico-system,Attempt:0,}" Nov 23 22:58:57.824018 containerd[2005]: time="2025-11-23T22:58:57.823942443Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:57.826465 containerd[2005]: time="2025-11-23T22:58:57.826235979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:58:57.826465 containerd[2005]: time="2025-11-23T22:58:57.826414167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:58:57.826968 kubelet[3322]: E1123 22:58:57.826610 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:58:57.826968 kubelet[3322]: E1123 22:58:57.826669 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:58:57.826968 kubelet[3322]: E1123 22:58:57.826838 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:57.858017 containerd[2005]: time="2025-11-23T22:58:57.855159159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:58:57.866038 systemd[1]: Started cri-containerd-6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9.scope - libcontainer container 6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9. 
Nov 23 22:58:57.957928 containerd[2005]: time="2025-11-23T22:58:57.957620656Z" level=info msg="StartContainer for \"6af10f36d651d72c8c5bbb09a66c42967cda51dad7b77f223df9db5bf098f0f9\" returns successfully" Nov 23 22:58:58.127971 systemd-networkd[1888]: calic0e57ab8b5e: Link UP Nov 23 22:58:58.129977 systemd-networkd[1888]: calic0e57ab8b5e: Gained carrier Nov 23 22:58:58.134555 systemd-networkd[1888]: cali07a6c98d6af: Gained IPv6LL Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:57.946 [INFO][5542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0 goldmane-666569f655- calico-system 328f5f71-5736-4873-add1-f3d5d3b7eef2 856 0 2025-11-23 22:58:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-17-147 goldmane-666569f655-zrbmg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic0e57ab8b5e [] [] }} ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:57.947 [INFO][5542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.038 [INFO][5575] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" HandleID="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Workload="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.038 [INFO][5575] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" HandleID="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Workload="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-17-147", "pod":"goldmane-666569f655-zrbmg", "timestamp":"2025-11-23 22:58:58.038272248 +0000 UTC"}, Hostname:"ip-172-31-17-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.038 [INFO][5575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.038 [INFO][5575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.039 [INFO][5575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-17-147' Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.066 [INFO][5575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.074 [INFO][5575] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.082 [INFO][5575] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.086 [INFO][5575] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.090 [INFO][5575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.090 [INFO][5575] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.093 [INFO][5575] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.102 [INFO][5575] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.115 [INFO][5575] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.136/26] block=192.168.89.128/26 handle="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.115 [INFO][5575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.136/26] handle="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" host="ip-172-31-17-147" Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.115 [INFO][5575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:58:58.158199 containerd[2005]: 2025-11-23 22:58:58.115 [INFO][5575] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.136/26] IPv6=[] ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" HandleID="k8s-pod-network.bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Workload="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.160615 containerd[2005]: 2025-11-23 22:58:58.120 [INFO][5542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"328f5f71-5736-4873-add1-f3d5d3b7eef2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"", Pod:"goldmane-666569f655-zrbmg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0e57ab8b5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:58.160615 containerd[2005]: 2025-11-23 22:58:58.120 [INFO][5542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.136/32] ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.160615 containerd[2005]: 2025-11-23 22:58:58.121 [INFO][5542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0e57ab8b5e ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.160615 containerd[2005]: 2025-11-23 22:58:58.131 [INFO][5542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.160615 containerd[2005]: 2025-11-23 22:58:58.132 [INFO][5542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" 
WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"328f5f71-5736-4873-add1-f3d5d3b7eef2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 58, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-17-147", ContainerID:"bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f", Pod:"goldmane-666569f655-zrbmg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic0e57ab8b5e", MAC:"da:ad:8e:2c:a3:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:58:58.160615 containerd[2005]: 2025-11-23 22:58:58.151 [INFO][5542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" Namespace="calico-system" Pod="goldmane-666569f655-zrbmg" WorkloadEndpoint="ip--172--31--17--147-k8s-goldmane--666569f655--zrbmg-eth0" Nov 23 22:58:58.176139 containerd[2005]: time="2025-11-23T22:58:58.176067961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:58.181238 containerd[2005]: time="2025-11-23T22:58:58.181144357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:58:58.181392 containerd[2005]: time="2025-11-23T22:58:58.181340629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:58:58.181659 kubelet[3322]: E1123 22:58:58.181588 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:58:58.182131 kubelet[3322]: E1123 22:58:58.181665 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:58:58.182131 kubelet[3322]: E1123 22:58:58.181817 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:58.183435 kubelet[3322]: E1123 22:58:58.183354 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:58.215602 kubelet[3322]: E1123 22:58:58.215217 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:58.241024 kubelet[3322]: E1123 22:58:58.240900 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:58:58.251885 containerd[2005]: time="2025-11-23T22:58:58.251813029Z" level=info msg="connecting to shim bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f" address="unix:///run/containerd/s/717cd1edac5cc13a80a8c4835cd13181ed8712f20a5a5a6819bfce0de19bd67f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:58:58.336845 kubelet[3322]: I1123 22:58:58.336741 3322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hsvdw" podStartSLOduration=53.336713593 podStartE2EDuration="53.336713593s" podCreationTimestamp="2025-11-23 22:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:58:58.336016945 +0000 UTC m=+58.855093625" watchObservedRunningTime="2025-11-23 22:58:58.336713593 +0000 UTC m=+58.855790249" Nov 23 22:58:58.347442 systemd[1]: Started cri-containerd-bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f.scope - libcontainer container bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f. Nov 23 22:58:58.576879 containerd[2005]: time="2025-11-23T22:58:58.576785463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zrbmg,Uid:328f5f71-5736-4873-add1-f3d5d3b7eef2,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf90c847257f682c9e0dcc6f94739b0075655c813c04bbb926723112e801de3f\"" Nov 23 22:58:58.583384 containerd[2005]: time="2025-11-23T22:58:58.583217247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:58:58.646692 systemd-networkd[1888]: cali2a3400e38ec: Gained IPv6LL Nov 23 22:58:58.828665 systemd[1]: Started sshd@8-172.31.17.147:22-139.178.68.195:59274.service - OpenSSH per-connection server daemon (139.178.68.195:59274). 
Nov 23 22:58:58.856057 containerd[2005]: time="2025-11-23T22:58:58.855985480Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:58.858341 containerd[2005]: time="2025-11-23T22:58:58.858222148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:58:58.858724 containerd[2005]: time="2025-11-23T22:58:58.858382948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:58:58.858947 kubelet[3322]: E1123 22:58:58.858631 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:58:58.858947 kubelet[3322]: E1123 22:58:58.858736 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:58:58.859524 kubelet[3322]: E1123 22:58:58.859435 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq7x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zrbmg_calico-system(328f5f71-5736-4873-add1-f3d5d3b7eef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:58.861463 kubelet[3322]: E1123 22:58:58.861332 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:58:59.040438 sshd[5646]: Accepted publickey for core from 139.178.68.195 port 59274 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:59.044201 sshd-session[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:59.052701 systemd-logind[1974]: New session 9 of user core. Nov 23 22:58:59.062806 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 23 22:58:59.093587 systemd-networkd[1888]: cali3222e280c00: Gained IPv6LL Nov 23 22:58:59.245993 kubelet[3322]: E1123 22:58:59.245864 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:58:59.254970 kubelet[3322]: E1123 22:58:59.254859 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:58:59.350444 systemd-networkd[1888]: calic0e57ab8b5e: Gained IPv6LL Nov 23 22:58:59.409380 sshd[5649]: Connection closed by 139.178.68.195 port 59274 Nov 23 22:58:59.411395 sshd-session[5646]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:59.419190 systemd[1]: sshd@8-172.31.17.147:22-139.178.68.195:59274.service: Deactivated successfully. Nov 23 22:58:59.425816 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 22:58:59.432669 systemd-logind[1974]: Session 9 logged out. Waiting for processes to exit. Nov 23 22:58:59.437115 systemd-logind[1974]: Removed session 9. 
Nov 23 22:59:00.252296 kubelet[3322]: E1123 22:59:00.251351 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:59:02.132472 ntpd[2161]: Listen normally on 6 vxlan.calico 192.168.89.128:123 Nov 23 22:59:02.132640 ntpd[2161]: Listen normally on 7 cali26690c1cf35 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 6 vxlan.calico 192.168.89.128:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 7 cali26690c1cf35 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 8 vxlan.calico [fe80::6450:33ff:fe46:bf5f%5]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 9 calia0516560302 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 10 cali0b15b959a9d [fe80::ecee:eeff:feee:eeee%9]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 11 cali79749b4169b [fe80::ecee:eeff:feee:eeee%10]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 12 cali07a6c98d6af [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 13 cali3222e280c00 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 14 cali2a3400e38ec [fe80::ecee:eeff:feee:eeee%13]:123 Nov 23 22:59:02.133052 ntpd[2161]: 23 Nov 22:59:02 ntpd[2161]: Listen normally on 15 calic0e57ab8b5e [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 22:59:02.132692 ntpd[2161]: Listen normally on 8 vxlan.calico [fe80::6450:33ff:fe46:bf5f%5]:123 Nov 23 22:59:02.132738 ntpd[2161]: Listen normally on 9 calia0516560302 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 23 22:59:02.132781 ntpd[2161]: Listen normally on 10 cali0b15b959a9d [fe80::ecee:eeff:feee:eeee%9]:123 Nov 23 22:59:02.132826 ntpd[2161]: Listen normally on 11 cali79749b4169b [fe80::ecee:eeff:feee:eeee%10]:123 Nov 23 22:59:02.132871 ntpd[2161]: Listen normally on 12 cali07a6c98d6af [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 22:59:02.132914 ntpd[2161]: Listen normally on 13 cali3222e280c00 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 22:59:02.132958 ntpd[2161]: Listen normally on 14 cali2a3400e38ec [fe80::ecee:eeff:feee:eeee%13]:123 Nov 23 22:59:02.133014 ntpd[2161]: Listen normally on 15 calic0e57ab8b5e [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 22:59:04.448331 systemd[1]: Started sshd@9-172.31.17.147:22-139.178.68.195:45106.service - OpenSSH per-connection server daemon (139.178.68.195:45106). Nov 23 22:59:04.651603 sshd[5681]: Accepted publickey for core from 139.178.68.195 port 45106 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:04.654385 sshd-session[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:04.663803 systemd-logind[1974]: New session 10 of user core. 
Nov 23 22:59:04.670497 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 22:59:04.948727 sshd[5684]: Connection closed by 139.178.68.195 port 45106 Nov 23 22:59:04.949537 sshd-session[5681]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:04.957447 systemd-logind[1974]: Session 10 logged out. Waiting for processes to exit. Nov 23 22:59:04.957771 systemd[1]: sshd@9-172.31.17.147:22-139.178.68.195:45106.service: Deactivated successfully. Nov 23 22:59:04.962885 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 22:59:04.966175 systemd-logind[1974]: Removed session 10. Nov 23 22:59:04.985595 systemd[1]: Started sshd@10-172.31.17.147:22-139.178.68.195:45112.service - OpenSSH per-connection server daemon (139.178.68.195:45112). Nov 23 22:59:05.188223 sshd[5697]: Accepted publickey for core from 139.178.68.195 port 45112 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:05.190661 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:05.201740 systemd-logind[1974]: New session 11 of user core. Nov 23 22:59:05.210547 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 22:59:05.561558 sshd[5700]: Connection closed by 139.178.68.195 port 45112 Nov 23 22:59:05.564539 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:05.574935 systemd[1]: sshd@10-172.31.17.147:22-139.178.68.195:45112.service: Deactivated successfully. Nov 23 22:59:05.582804 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 22:59:05.587626 systemd-logind[1974]: Session 11 logged out. Waiting for processes to exit. Nov 23 22:59:05.612741 systemd[1]: Started sshd@11-172.31.17.147:22-139.178.68.195:45114.service - OpenSSH per-connection server daemon (139.178.68.195:45114). Nov 23 22:59:05.616476 systemd-logind[1974]: Removed session 11. Nov 23 22:59:05.821103 sshd[5713]: Accepted publickey for core from 139.178.68.195 port 45114 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:05.826386 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:05.838096 systemd-logind[1974]: New session 12 of user core. Nov 23 22:59:05.844544 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 22:59:06.108290 sshd[5716]: Connection closed by 139.178.68.195 port 45114 Nov 23 22:59:06.107225 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:06.116969 systemd[1]: sshd@11-172.31.17.147:22-139.178.68.195:45114.service: Deactivated successfully. Nov 23 22:59:06.123644 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 22:59:06.127819 systemd-logind[1974]: Session 12 logged out. Waiting for processes to exit. Nov 23 22:59:06.131076 systemd-logind[1974]: Removed session 12. 
Nov 23 22:59:06.747702 containerd[2005]: time="2025-11-23T22:59:06.747644579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:59:06.990130 containerd[2005]: time="2025-11-23T22:59:06.990035016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:06.992620 containerd[2005]: time="2025-11-23T22:59:06.992545392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:59:06.992745 containerd[2005]: time="2025-11-23T22:59:06.992671092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:59:06.992996 kubelet[3322]: E1123 22:59:06.992935 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:06.994196 kubelet[3322]: E1123 22:59:06.993025 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:06.994196 kubelet[3322]: E1123 22:59:06.993288 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49a5b0f5daf440afb726a29c7c6e8f8b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:06.998347 containerd[2005]: time="2025-11-23T22:59:06.998159376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:59:07.245703 containerd[2005]: time="2025-11-23T22:59:07.245625154Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:07.248030 containerd[2005]: time="2025-11-23T22:59:07.247956934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:59:07.248148 containerd[2005]: time="2025-11-23T22:59:07.248069098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:07.248646 kubelet[3322]: E1123 22:59:07.248468 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:07.249402 kubelet[3322]: E1123 22:59:07.248534 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:07.249402 kubelet[3322]: E1123 22:59:07.249112 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:07.250485 kubelet[3322]: E1123 22:59:07.250402 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:59:07.749429 containerd[2005]: time="2025-11-23T22:59:07.749331420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:08.036973 containerd[2005]: time="2025-11-23T22:59:08.036902722Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:08.039451 containerd[2005]: time="2025-11-23T22:59:08.039309466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:08.039451 containerd[2005]: time="2025-11-23T22:59:08.039376066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:08.039730 kubelet[3322]: E1123 22:59:08.039581 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:08.039730 kubelet[3322]: E1123 22:59:08.039640 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:08.040242 kubelet[3322]: E1123 22:59:08.039821 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-hc826_calico-apiserver(ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:08.041918 kubelet[3322]: E1123 22:59:08.041862 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:59:09.747899 containerd[2005]: time="2025-11-23T22:59:09.747806486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:09.977441 containerd[2005]: time="2025-11-23T22:59:09.977350191Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:09.979809 containerd[2005]: time="2025-11-23T22:59:09.979738575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:09.980112 containerd[2005]: time="2025-11-23T22:59:09.979787187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:09.980211 kubelet[3322]: E1123 22:59:09.980132 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:09.980211 kubelet[3322]: E1123 22:59:09.980192 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:09.981479 kubelet[3322]: E1123 22:59:09.981068 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8ffj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-znnxr_calico-apiserver(d24d7369-6494-4a66-8309-347720b5fc56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:09.982799 kubelet[3322]: E1123 22:59:09.982732 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:59:11.148711 systemd[1]: Started sshd@12-172.31.17.147:22-139.178.68.195:54898.service - OpenSSH per-connection server daemon (139.178.68.195:54898). Nov 23 22:59:11.345211 sshd[5736]: Accepted publickey for core from 139.178.68.195 port 54898 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:11.347564 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:11.357434 systemd-logind[1974]: New session 13 of user core. Nov 23 22:59:11.363563 systemd[1]: Started session-13.scope - Session 13 of User core. 
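Between the SSH housekeeping, the same NotFound pattern repeats for the whisker, whisker-backend, apiserver, kube-controllers, goldmane and csi images. When triaging a journal like this one it can help to reduce the noise to a per-image failure count; the sketch below does that by scanning for containerd's escaped PullImage ... failed messages, in the same format as the records above. The journalctl invocation in the comment is an assumption about how the log might be fed in, not something taken from the log itself.

# Minimal sketch (not from the log): count failed image pulls in journal text of the
# form  msg="PullImage \"<image>\" failed"  as seen in the containerd records above.
# Possible usage:  journalctl -u containerd.service | python3 failed_pulls.py
import re
import sys
from collections import Counter

PATTERN = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

def failed_pulls(lines):
    counts = Counter()
    for line in lines:
        for image in PATTERN.findall(line):
            counts[image] += 1
    return counts

if __name__ == "__main__":
    for image, n in failed_pulls(sys.stdin).most_common():
        print(f"{n:4d}  {image}")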
Nov 23 22:59:11.616772 sshd[5739]: Connection closed by 139.178.68.195 port 54898 Nov 23 22:59:11.617820 sshd-session[5736]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:11.628223 systemd[1]: sshd@12-172.31.17.147:22-139.178.68.195:54898.service: Deactivated successfully. Nov 23 22:59:11.633205 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 22:59:11.635604 systemd-logind[1974]: Session 13 logged out. Waiting for processes to exit. Nov 23 22:59:11.639542 systemd-logind[1974]: Removed session 13. Nov 23 22:59:11.753344 containerd[2005]: time="2025-11-23T22:59:11.751828492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:59:12.038861 containerd[2005]: time="2025-11-23T22:59:12.038787098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:12.041757 containerd[2005]: time="2025-11-23T22:59:12.041691146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:59:12.041875 containerd[2005]: time="2025-11-23T22:59:12.041805662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:12.042170 kubelet[3322]: E1123 22:59:12.042099 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:12.042773 kubelet[3322]: E1123 22:59:12.042170 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:12.042773 kubelet[3322]: E1123 22:59:12.042391 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d46955649-8px8j_calico-system(efcb5707-de3f-40a1-84e7-2d29faf16856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:12.044349 kubelet[3322]: E1123 22:59:12.044063 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:59:12.747581 containerd[2005]: time="2025-11-23T22:59:12.747194765Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:59:13.057309 containerd[2005]: time="2025-11-23T22:59:13.057163071Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:13.060060 containerd[2005]: time="2025-11-23T22:59:13.059944143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:59:13.060060 containerd[2005]: time="2025-11-23T22:59:13.060007299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:13.061361 kubelet[3322]: E1123 22:59:13.060479 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:13.061361 kubelet[3322]: E1123 22:59:13.060574 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:13.061361 kubelet[3322]: E1123 22:59:13.060746 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq7x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zrbmg_calico-system(328f5f71-5736-4873-add1-f3d5d3b7eef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:13.062741 kubelet[3322]: E1123 22:59:13.062600 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:59:14.748122 containerd[2005]: time="2025-11-23T22:59:14.747965839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:59:15.011495 containerd[2005]: time="2025-11-23T22:59:15.011320372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:15.014147 containerd[2005]: time="2025-11-23T22:59:15.013942576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:59:15.014439 containerd[2005]: time="2025-11-23T22:59:15.014008444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:59:15.014911 kubelet[3322]: E1123 22:59:15.014805 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:15.016458 kubelet[3322]: E1123 22:59:15.014919 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:15.016458 kubelet[3322]: E1123 22:59:15.015446 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:15.020448 containerd[2005]: time="2025-11-23T22:59:15.020392528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:59:15.284828 containerd[2005]: time="2025-11-23T22:59:15.284748894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:15.287149 containerd[2005]: time="2025-11-23T22:59:15.287082690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:59:15.287281 containerd[2005]: time="2025-11-23T22:59:15.287202678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:59:15.287477 kubelet[3322]: E1123 22:59:15.287421 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:15.287550 kubelet[3322]: E1123 22:59:15.287490 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:15.287749 kubelet[3322]: E1123 22:59:15.287643 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:15.289642 kubelet[3322]: E1123 22:59:15.289530 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:59:16.657078 systemd[1]: Started sshd@13-172.31.17.147:22-139.178.68.195:54904.service - OpenSSH per-connection server daemon (139.178.68.195:54904). Nov 23 22:59:16.867751 sshd[5757]: Accepted publickey for core from 139.178.68.195 port 54904 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:16.870137 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:16.878738 systemd-logind[1974]: New session 14 of user core. Nov 23 22:59:16.886491 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 22:59:17.162021 sshd[5760]: Connection closed by 139.178.68.195 port 54904 Nov 23 22:59:17.162892 sshd-session[5757]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:17.169200 systemd-logind[1974]: Session 14 logged out. Waiting for processes to exit. Nov 23 22:59:17.170203 systemd[1]: sshd@13-172.31.17.147:22-139.178.68.195:54904.service: Deactivated successfully. Nov 23 22:59:17.174367 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 22:59:17.182589 systemd-logind[1974]: Removed session 14. Nov 23 22:59:20.747295 kubelet[3322]: E1123 22:59:20.747033 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:59:20.748877 kubelet[3322]: E1123 22:59:20.748630 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:59:22.204854 systemd[1]: Started sshd@14-172.31.17.147:22-139.178.68.195:36618.service - OpenSSH per-connection server daemon (139.178.68.195:36618). 
Nov 23 22:59:22.421295 sshd[5800]: Accepted publickey for core from 139.178.68.195 port 36618 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:22.425343 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:22.437935 systemd-logind[1974]: New session 15 of user core. Nov 23 22:59:22.446870 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 22:59:22.757422 sshd[5803]: Connection closed by 139.178.68.195 port 36618 Nov 23 22:59:22.761562 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:22.769704 systemd[1]: sshd@14-172.31.17.147:22-139.178.68.195:36618.service: Deactivated successfully. Nov 23 22:59:22.778613 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 22:59:22.784360 systemd-logind[1974]: Session 15 logged out. Waiting for processes to exit. Nov 23 22:59:22.790410 systemd-logind[1974]: Removed session 15. Nov 23 22:59:24.748378 kubelet[3322]: E1123 22:59:24.748147 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:59:25.749205 kubelet[3322]: E1123 22:59:25.749001 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:59:27.751146 kubelet[3322]: E1123 22:59:27.750605 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:59:27.799418 systemd[1]: Started sshd@15-172.31.17.147:22-139.178.68.195:36624.service - OpenSSH per-connection server daemon (139.178.68.195:36624). Nov 23 22:59:28.022631 sshd[5818]: Accepted publickey for core from 139.178.68.195 port 36624 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:28.026033 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:28.037373 systemd-logind[1974]: New session 16 of user core. Nov 23 22:59:28.042557 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 23 22:59:28.357301 sshd[5821]: Connection closed by 139.178.68.195 port 36624 Nov 23 22:59:28.358561 sshd-session[5818]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:28.368435 systemd-logind[1974]: Session 16 logged out. Waiting for processes to exit. Nov 23 22:59:28.369773 systemd[1]: sshd@15-172.31.17.147:22-139.178.68.195:36624.service: Deactivated successfully. Nov 23 22:59:28.378311 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 22:59:28.412645 systemd-logind[1974]: Removed session 16. Nov 23 22:59:28.416809 systemd[1]: Started sshd@16-172.31.17.147:22-139.178.68.195:36626.service - OpenSSH per-connection server daemon (139.178.68.195:36626). Nov 23 22:59:28.626118 sshd[5833]: Accepted publickey for core from 139.178.68.195 port 36626 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:28.628896 sshd-session[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:28.641038 systemd-logind[1974]: New session 17 of user core. Nov 23 22:59:28.649997 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 22:59:28.751009 kubelet[3322]: E1123 22:59:28.750820 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:59:29.221049 sshd[5836]: Connection closed by 139.178.68.195 port 36626 Nov 23 22:59:29.221615 sshd-session[5833]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:29.231085 systemd-logind[1974]: Session 17 logged out. Waiting for processes to exit. Nov 23 22:59:29.233271 systemd[1]: sshd@16-172.31.17.147:22-139.178.68.195:36626.service: Deactivated successfully. Nov 23 22:59:29.241417 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 22:59:29.264929 systemd-logind[1974]: Removed session 17. Nov 23 22:59:29.266989 systemd[1]: Started sshd@17-172.31.17.147:22-139.178.68.195:36640.service - OpenSSH per-connection server daemon (139.178.68.195:36640). Nov 23 22:59:29.469582 sshd[5846]: Accepted publickey for core from 139.178.68.195 port 36640 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:29.472752 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:29.485896 systemd-logind[1974]: New session 18 of user core. Nov 23 22:59:29.493611 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 23 22:59:30.962590 sshd[5849]: Connection closed by 139.178.68.195 port 36640 Nov 23 22:59:30.966868 sshd-session[5846]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:30.980233 systemd[1]: sshd@17-172.31.17.147:22-139.178.68.195:36640.service: Deactivated successfully. Nov 23 22:59:30.980942 systemd-logind[1974]: Session 18 logged out. Waiting for processes to exit. Nov 23 22:59:30.992687 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 22:59:31.018508 systemd-logind[1974]: Removed session 18. Nov 23 22:59:31.024763 systemd[1]: Started sshd@18-172.31.17.147:22-139.178.68.195:45600.service - OpenSSH per-connection server daemon (139.178.68.195:45600). Nov 23 22:59:31.251976 sshd[5866]: Accepted publickey for core from 139.178.68.195 port 45600 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:31.254773 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:31.268648 systemd-logind[1974]: New session 19 of user core. Nov 23 22:59:31.276582 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 23 22:59:32.007125 sshd[5869]: Connection closed by 139.178.68.195 port 45600 Nov 23 22:59:32.008622 sshd-session[5866]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:32.023207 systemd[1]: sshd@18-172.31.17.147:22-139.178.68.195:45600.service: Deactivated successfully. Nov 23 22:59:32.029656 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 22:59:32.036653 systemd-logind[1974]: Session 19 logged out. Waiting for processes to exit. Nov 23 22:59:32.061016 systemd[1]: Started sshd@19-172.31.17.147:22-139.178.68.195:45616.service - OpenSSH per-connection server daemon (139.178.68.195:45616). Nov 23 22:59:32.067309 systemd-logind[1974]: Removed session 19. Nov 23 22:59:32.277089 sshd[5880]: Accepted publickey for core from 139.178.68.195 port 45616 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:32.281014 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:32.295628 systemd-logind[1974]: New session 20 of user core. Nov 23 22:59:32.304584 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 23 22:59:32.602704 sshd[5883]: Connection closed by 139.178.68.195 port 45616 Nov 23 22:59:32.603901 sshd-session[5880]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:32.616098 systemd[1]: sshd@19-172.31.17.147:22-139.178.68.195:45616.service: Deactivated successfully. Nov 23 22:59:32.624636 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 22:59:32.631411 systemd-logind[1974]: Session 20 logged out. Waiting for processes to exit. Nov 23 22:59:32.636138 systemd-logind[1974]: Removed session 20. 
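The repeated "fetch failed after status: 404 Not Found" entries above mean ghcr.io is not serving any manifest for the flatcar/calico images at tag v3.30.4, so every pull ends in ErrImagePull. The lookup can be reproduced outside containerd; the sketch below is illustrative only, assumes ghcr.io's standard anonymous OCI/Docker token flow at /token, and uses a repository and tag copied from the log (nothing in it is part of the journal itself):

```python
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/csi"   # repository taken from the log above
TAG = "v3.30.4"               # tag the kubelet tried to pull

# Anonymous pull token (standard registry token flow; assumed to apply here).
token_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# Ask the registry for the tag's manifest, accepting index/list media types.
manifest_url = f"https://{REGISTRY}/v2/{REPO}/manifests/{TAG}"
req = urllib.request.Request(manifest_url, headers={
    "Authorization": f"Bearer {token}",
    "Accept": "application/vnd.oci.image.index.v1+json, "
              "application/vnd.docker.distribution.manifest.list.v2+json",
})
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest found, HTTP", resp.status)
except urllib.error.HTTPError as err:
    # A 404 here corresponds to the containerd "not found" errors in the journal.
    print("manifest lookup failed:", err.code, err.reason)
```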
Nov 23 22:59:34.750213 containerd[2005]: time="2025-11-23T22:59:34.749968238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:35.023202 containerd[2005]: time="2025-11-23T22:59:35.023049864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:35.025676 containerd[2005]: time="2025-11-23T22:59:35.025592640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:35.025907 containerd[2005]: time="2025-11-23T22:59:35.025643328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:35.026053 kubelet[3322]: E1123 22:59:35.025992 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:35.027826 kubelet[3322]: E1123 22:59:35.026083 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:35.027826 kubelet[3322]: E1123 22:59:35.026394 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-hc826_calico-apiserver(ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:35.027826 kubelet[3322]: E1123 22:59:35.027729 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:59:35.752620 containerd[2005]: time="2025-11-23T22:59:35.751712331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:59:36.021199 containerd[2005]: time="2025-11-23T22:59:36.021038449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:36.023286 containerd[2005]: time="2025-11-23T22:59:36.023167153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:59:36.023506 containerd[2005]: time="2025-11-23T22:59:36.023189785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:59:36.023565 kubelet[3322]: E1123 22:59:36.023487 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:36.023565 kubelet[3322]: E1123 22:59:36.023548 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:36.023779 kubelet[3322]: E1123 22:59:36.023707 3322 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49a5b0f5daf440afb726a29c7c6e8f8b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:36.028530 containerd[2005]: time="2025-11-23T22:59:36.027565813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:59:36.311018 containerd[2005]: time="2025-11-23T22:59:36.310810082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:36.313097 containerd[2005]: time="2025-11-23T22:59:36.313024538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:59:36.313097 containerd[2005]: time="2025-11-23T22:59:36.313081250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:36.313674 kubelet[3322]: E1123 22:59:36.313612 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:36.315859 kubelet[3322]: E1123 22:59:36.313683 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:36.315859 kubelet[3322]: E1123 22:59:36.313872 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:36.315859 kubelet[3322]: E1123 22:59:36.315415 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:59:36.751050 containerd[2005]: time="2025-11-23T22:59:36.750904060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:37.053337 containerd[2005]: 
time="2025-11-23T22:59:37.053086718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:37.055714 containerd[2005]: time="2025-11-23T22:59:37.055601126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:37.055838 containerd[2005]: time="2025-11-23T22:59:37.055668914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:37.056581 kubelet[3322]: E1123 22:59:37.056494 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:37.056865 kubelet[3322]: E1123 22:59:37.056803 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:37.058712 kubelet[3322]: E1123 22:59:37.058584 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8ffj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-znnxr_calico-apiserver(d24d7369-6494-4a66-8309-347720b5fc56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:37.060227 kubelet[3322]: E1123 22:59:37.060152 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:59:37.651379 systemd[1]: Started sshd@20-172.31.17.147:22-139.178.68.195:45632.service - OpenSSH per-connection server daemon (139.178.68.195:45632). Nov 23 22:59:37.870220 sshd[5903]: Accepted publickey for core from 139.178.68.195 port 45632 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:37.874554 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:37.887719 systemd-logind[1974]: New session 21 of user core. Nov 23 22:59:37.897617 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 23 22:59:38.414844 sshd[5906]: Connection closed by 139.178.68.195 port 45632 Nov 23 22:59:38.417663 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:38.427553 systemd[1]: sshd@20-172.31.17.147:22-139.178.68.195:45632.service: Deactivated successfully. Nov 23 22:59:38.436443 systemd[1]: session-21.scope: Deactivated successfully. Nov 23 22:59:38.441670 systemd-logind[1974]: Session 21 logged out. Waiting for processes to exit. Nov 23 22:59:38.445983 systemd-logind[1974]: Removed session 21. 
Nov 23 22:59:39.757314 containerd[2005]: time="2025-11-23T22:59:39.756955663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:59:40.084891 containerd[2005]: time="2025-11-23T22:59:40.084807461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:40.087205 containerd[2005]: time="2025-11-23T22:59:40.087109217Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:59:40.087448 containerd[2005]: time="2025-11-23T22:59:40.087250325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:40.087811 kubelet[3322]: E1123 22:59:40.087679 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:40.088361 kubelet[3322]: E1123 22:59:40.087850 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:40.089456 kubelet[3322]: E1123 22:59:40.089206 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq7x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zrbmg_calico-system(328f5f71-5736-4873-add1-f3d5d3b7eef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:40.091529 kubelet[3322]: E1123 22:59:40.091440 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:59:40.748342 containerd[2005]: time="2025-11-23T22:59:40.748230392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:59:41.003043 containerd[2005]: time="2025-11-23T22:59:41.002754785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:41.005208 containerd[2005]: time="2025-11-23T22:59:41.005007725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:59:41.005208 containerd[2005]: time="2025-11-23T22:59:41.005138177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:41.005477 kubelet[3322]: E1123 22:59:41.005363 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:41.005477 kubelet[3322]: E1123 22:59:41.005424 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:41.005689 kubelet[3322]: E1123 22:59:41.005606 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d46955649-8px8j_calico-system(efcb5707-de3f-40a1-84e7-2d29faf16856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:41.006947 kubelet[3322]: E1123 22:59:41.006875 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:59:43.226803 systemd[1]: Started sshd@21-172.31.17.147:22-139.178.68.195:47470.service - OpenSSH per-connection server daemon (139.178.68.195:47470). Nov 23 22:59:43.449466 sshd[5922]: Accepted publickey for core from 139.178.68.195 port 47470 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:43.454219 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:43.464340 systemd-logind[1974]: New session 22 of user core. Nov 23 22:59:43.472088 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 23 22:59:43.750828 containerd[2005]: time="2025-11-23T22:59:43.750700811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:59:43.767314 sshd[5925]: Connection closed by 139.178.68.195 port 47470 Nov 23 22:59:43.767861 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:43.780150 systemd[1]: sshd@21-172.31.17.147:22-139.178.68.195:47470.service: Deactivated successfully. Nov 23 22:59:43.790656 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 22:59:43.799619 systemd-logind[1974]: Session 22 logged out. Waiting for processes to exit. Nov 23 22:59:43.808920 systemd-logind[1974]: Removed session 22. Nov 23 22:59:44.005357 containerd[2005]: time="2025-11-23T22:59:44.004615976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:44.007200 containerd[2005]: time="2025-11-23T22:59:44.007062068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:59:44.007200 containerd[2005]: time="2025-11-23T22:59:44.007142672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:59:44.007641 kubelet[3322]: E1123 22:59:44.007578 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:44.009810 kubelet[3322]: E1123 22:59:44.009345 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:59:44.009810 kubelet[3322]: E1123 22:59:44.009712 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:44.015435 containerd[2005]: time="2025-11-23T22:59:44.015373736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:59:44.243856 containerd[2005]: time="2025-11-23T22:59:44.243774297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:44.246204 containerd[2005]: time="2025-11-23T22:59:44.246085762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:59:44.246204 containerd[2005]: time="2025-11-23T22:59:44.246163282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:59:44.246507 kubelet[3322]: E1123 22:59:44.246446 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:44.246588 kubelet[3322]: E1123 22:59:44.246517 3322 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:59:44.246751 kubelet[3322]: E1123 22:59:44.246672 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:44.248450 kubelet[3322]: E1123 22:59:44.248351 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:59:48.746983 kubelet[3322]: E1123 22:59:48.746891 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 22:59:48.809724 systemd[1]: Started sshd@22-172.31.17.147:22-139.178.68.195:47480.service - OpenSSH per-connection server daemon (139.178.68.195:47480). Nov 23 22:59:49.036289 sshd[5937]: Accepted publickey for core from 139.178.68.195 port 47480 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:49.037904 sshd-session[5937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:49.049775 systemd-logind[1974]: New session 23 of user core. Nov 23 22:59:49.059616 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 23 22:59:49.605791 sshd[5940]: Connection closed by 139.178.68.195 port 47480 Nov 23 22:59:49.609606 sshd-session[5937]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:49.619959 systemd[1]: sshd@22-172.31.17.147:22-139.178.68.195:47480.service: Deactivated successfully. Nov 23 22:59:49.629854 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 22:59:49.632430 systemd-logind[1974]: Session 23 logged out. Waiting for processes to exit. Nov 23 22:59:49.638224 systemd-logind[1974]: Removed session 23. 
Nov 23 22:59:49.751286 kubelet[3322]: E1123 22:59:49.751053 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 22:59:49.754410 kubelet[3322]: E1123 22:59:49.754319 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 22:59:52.749006 kubelet[3322]: E1123 22:59:52.748504 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 22:59:54.410975 systemd[1]: Started sshd@23-172.31.17.147:22-139.178.68.195:60844.service - OpenSSH per-connection server daemon (139.178.68.195:60844). Nov 23 22:59:54.640420 sshd[5976]: Accepted publickey for core from 139.178.68.195 port 60844 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:59:54.644068 sshd-session[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:59:54.653357 systemd-logind[1974]: New session 24 of user core. Nov 23 22:59:54.659547 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 23 22:59:54.937816 sshd[5979]: Connection closed by 139.178.68.195 port 60844 Nov 23 22:59:54.938943 sshd-session[5976]: pam_unix(sshd:session): session closed for user core Nov 23 22:59:54.949077 systemd[1]: session-24.scope: Deactivated successfully. Nov 23 22:59:54.950445 systemd[1]: sshd@23-172.31.17.147:22-139.178.68.195:60844.service: Deactivated successfully. Nov 23 22:59:54.958980 systemd-logind[1974]: Session 24 logged out. Waiting for processes to exit. Nov 23 22:59:54.962963 systemd-logind[1974]: Removed session 24. 
Nov 23 22:59:55.748965 kubelet[3322]: E1123 22:59:55.748867 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 22:59:58.751300 kubelet[3322]: E1123 22:59:58.751159 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 22:59:59.980927 systemd[1]: Started sshd@24-172.31.17.147:22-139.178.68.195:60850.service - OpenSSH per-connection server daemon (139.178.68.195:60850). Nov 23 23:00:00.186540 sshd[5994]: Accepted publickey for core from 139.178.68.195 port 60850 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:00.190483 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:00.204387 systemd-logind[1974]: New session 25 of user core. Nov 23 23:00:00.207584 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 23 23:00:00.522464 sshd[5997]: Connection closed by 139.178.68.195 port 60850 Nov 23 23:00:00.523364 sshd-session[5994]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:00.531164 systemd[1]: sshd@24-172.31.17.147:22-139.178.68.195:60850.service: Deactivated successfully. Nov 23 23:00:00.537574 systemd[1]: session-25.scope: Deactivated successfully. Nov 23 23:00:00.542539 systemd-logind[1974]: Session 25 logged out. Waiting for processes to exit. Nov 23 23:00:00.545672 systemd-logind[1974]: Removed session 25. 
Nov 23 23:00:01.750216 kubelet[3322]: E1123 23:00:01.750150 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 23:00:01.753215 kubelet[3322]: E1123 23:00:01.753146 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 23:00:03.755811 kubelet[3322]: E1123 23:00:03.755741 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 23:00:04.746494 update_engine[1975]: I20251123 23:00:04.746407 1975 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 23 23:00:04.746494 update_engine[1975]: I20251123 23:00:04.746488 1975 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 23 23:00:04.748473 update_engine[1975]: I20251123 23:00:04.746933 1975 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 23 23:00:04.748963 update_engine[1975]: I20251123 23:00:04.748903 1975 omaha_request_params.cc:62] Current group set to beta Nov 23 23:00:04.749314 update_engine[1975]: I20251123 23:00:04.749078 1975 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 23 23:00:04.749314 update_engine[1975]: I20251123 23:00:04.749109 1975 update_attempter.cc:643] Scheduling an action processor start. 
Nov 23 23:00:04.749314 update_engine[1975]: I20251123 23:00:04.749145 1975 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 23 23:00:04.749314 update_engine[1975]: I20251123 23:00:04.749207 1975 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 23 23:00:04.749575 update_engine[1975]: I20251123 23:00:04.749423 1975 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 23 23:00:04.749575 update_engine[1975]: I20251123 23:00:04.749446 1975 omaha_request_action.cc:272] Request: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: Nov 23 23:00:04.749575 update_engine[1975]: I20251123 23:00:04.749461 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 23 23:00:04.753583 locksmithd[2025]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 23 23:00:04.757242 update_engine[1975]: I20251123 23:00:04.757153 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 23 23:00:04.759657 update_engine[1975]: I20251123 23:00:04.759575 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 23 23:00:04.791981 update_engine[1975]: E20251123 23:00:04.791879 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 23 23:00:04.792153 update_engine[1975]: I20251123 23:00:04.792034 1975 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 23 23:00:05.564453 systemd[1]: Started sshd@25-172.31.17.147:22-139.178.68.195:54082.service - OpenSSH per-connection server daemon (139.178.68.195:54082). Nov 23 23:00:05.750031 kubelet[3322]: E1123 23:00:05.749480 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 23:00:05.782083 sshd[6009]: Accepted publickey for core from 139.178.68.195 port 54082 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:05.784597 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:05.799400 systemd-logind[1974]: New session 26 of user core. Nov 23 23:00:05.809571 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 23 23:00:06.129398 sshd[6012]: Connection closed by 139.178.68.195 port 54082 Nov 23 23:00:06.130558 sshd-session[6009]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:06.140956 systemd[1]: sshd@25-172.31.17.147:22-139.178.68.195:54082.service: Deactivated successfully. Nov 23 23:00:06.149065 systemd[1]: session-26.scope: Deactivated successfully. Nov 23 23:00:06.154651 systemd-logind[1974]: Session 26 logged out. Waiting for processes to exit. Nov 23 23:00:06.158976 systemd-logind[1974]: Removed session 26. 
Nov 23 23:00:10.748101 kubelet[3322]: E1123 23:00:10.747230 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 23:00:13.750698 kubelet[3322]: E1123 23:00:13.750307 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56" Nov 23 23:00:13.754107 kubelet[3322]: E1123 23:00:13.753829 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 23:00:14.745683 update_engine[1975]: I20251123 23:00:14.744814 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 23 23:00:14.745683 update_engine[1975]: I20251123 23:00:14.744938 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 23 23:00:14.745683 update_engine[1975]: I20251123 23:00:14.745549 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 23 23:00:14.747101 update_engine[1975]: E20251123 23:00:14.746903 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 23 23:00:14.747101 update_engine[1975]: I20251123 23:00:14.747031 1975 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 23 23:00:14.748234 kubelet[3322]: E1123 23:00:14.748137 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 23:00:15.748907 containerd[2005]: time="2025-11-23T23:00:15.748747422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:16.405416 containerd[2005]: time="2025-11-23T23:00:16.405284369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:16.407643 containerd[2005]: time="2025-11-23T23:00:16.407574653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:16.407757 containerd[2005]: time="2025-11-23T23:00:16.407692313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:16.407950 kubelet[3322]: E1123 23:00:16.407895 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:16.408526 kubelet[3322]: E1123 23:00:16.407964 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:16.408526 kubelet[3322]: E1123 23:00:16.408138 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-hc826_calico-apiserver(ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:16.409435 kubelet[3322]: E1123 23:00:16.409387 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef" Nov 23 23:00:17.746863 kubelet[3322]: E1123 23:00:17.746800 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2" Nov 23 23:00:19.742836 systemd[1]: cri-containerd-e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85.scope: Deactivated successfully. 
Nov 23 23:00:19.744176 systemd[1]: cri-containerd-e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85.scope: Consumed 28.491s CPU time, 102.4M memory peak. Nov 23 23:00:19.755526 containerd[2005]: time="2025-11-23T23:00:19.755429110Z" level=info msg="received container exit event container_id:\"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\" id:\"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\" pid:3915 exit_status:1 exited_at:{seconds:1763938819 nanos:754088866}" Nov 23 23:00:19.803338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85-rootfs.mount: Deactivated successfully. Nov 23 23:00:20.533975 kubelet[3322]: I1123 23:00:20.533918 3322 scope.go:117] "RemoveContainer" containerID="e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85" Nov 23 23:00:20.537277 containerd[2005]: time="2025-11-23T23:00:20.537206614Z" level=info msg="CreateContainer within sandbox \"c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 23 23:00:20.554978 containerd[2005]: time="2025-11-23T23:00:20.554283094Z" level=info msg="Container f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:00:20.565243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount418180258.mount: Deactivated successfully. Nov 23 23:00:20.572987 containerd[2005]: time="2025-11-23T23:00:20.572818978Z" level=info msg="CreateContainer within sandbox \"c1e31043169603e3e275b0b4b4fd4faa455ae5234d34c099c5bf7954ab96914e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f\"" Nov 23 23:00:20.574037 containerd[2005]: time="2025-11-23T23:00:20.573988906Z" level=info msg="StartContainer for \"f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f\"" Nov 23 23:00:20.575739 containerd[2005]: time="2025-11-23T23:00:20.575682910Z" level=info msg="connecting to shim f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f" address="unix:///run/containerd/s/57200b57480db28b6d5ba0e31343d1efb222f2c63e32f69963c593f0727904b3" protocol=ttrpc version=3 Nov 23 23:00:20.618584 systemd[1]: Started cri-containerd-f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f.scope - libcontainer container f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f. Nov 23 23:00:20.682923 containerd[2005]: time="2025-11-23T23:00:20.682830046Z" level=info msg="StartContainer for \"f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f\" returns successfully" Nov 23 23:00:20.803462 systemd[1]: cri-containerd-5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5.scope: Deactivated successfully. Nov 23 23:00:20.805443 systemd[1]: cri-containerd-5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5.scope: Consumed 5.567s CPU time, 55.3M memory peak. 
Nov 23 23:00:20.809708 containerd[2005]: time="2025-11-23T23:00:20.809635151Z" level=info msg="received container exit event container_id:\"5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5\" id:\"5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5\" pid:3169 exit_status:1 exited_at:{seconds:1763938820 nanos:808093907}" Nov 23 23:00:20.854858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5-rootfs.mount: Deactivated successfully. Nov 23 23:00:21.553231 kubelet[3322]: I1123 23:00:21.553174 3322 scope.go:117] "RemoveContainer" containerID="5726c424b94a5d04022d9e437b6be8fa33bd4ac78bc3c5ff88c2c094457e7cb5" Nov 23 23:00:21.559502 containerd[2005]: time="2025-11-23T23:00:21.559443479Z" level=info msg="CreateContainer within sandbox \"5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 23 23:00:21.581284 containerd[2005]: time="2025-11-23T23:00:21.579810959Z" level=info msg="Container fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:00:21.601193 containerd[2005]: time="2025-11-23T23:00:21.601118507Z" level=info msg="CreateContainer within sandbox \"5b5f39c35ad7ee7c309522d7ddc95cf5063be0bd2499e41c96165fadf9683e00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2\"" Nov 23 23:00:21.602210 containerd[2005]: time="2025-11-23T23:00:21.602149091Z" level=info msg="StartContainer for \"fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2\"" Nov 23 23:00:21.604693 containerd[2005]: time="2025-11-23T23:00:21.604618223Z" level=info msg="connecting to shim fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2" address="unix:///run/containerd/s/aa0d46234440cdaf8fdb558b773555e13d1fd0ccadcee72ecde4201500538e6a" protocol=ttrpc version=3 Nov 23 23:00:21.653568 systemd[1]: Started cri-containerd-fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2.scope - libcontainer container fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2. 
Nov 23 23:00:21.753795 containerd[2005]: time="2025-11-23T23:00:21.752421288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:00:21.778234 containerd[2005]: time="2025-11-23T23:00:21.778148376Z" level=info msg="StartContainer for \"fd9537ea61744338f54cae18a3ae2feadae6b8494ef929c5e4887b56daab4ca2\" returns successfully" Nov 23 23:00:21.921994 kubelet[3322]: E1123 23:00:21.920949 3322 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-147?timeout=10s\": context deadline exceeded" Nov 23 23:00:22.034515 containerd[2005]: time="2025-11-23T23:00:22.034303773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:22.036752 containerd[2005]: time="2025-11-23T23:00:22.036535905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:00:22.036752 containerd[2005]: time="2025-11-23T23:00:22.036557325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:22.037222 kubelet[3322]: E1123 23:00:22.037121 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:22.037222 kubelet[3322]: E1123 23:00:22.037186 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:22.037907 kubelet[3322]: E1123 23:00:22.037829 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf7pz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5d46955649-8px8j_calico-system(efcb5707-de3f-40a1-84e7-2d29faf16856): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:22.039328 kubelet[3322]: E1123 23:00:22.039247 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856" Nov 23 23:00:24.745392 update_engine[1975]: I20251123 23:00:24.745296 1975 libcurl_http_fetcher.cc:47] 
Starting/Resuming transfer Nov 23 23:00:24.745928 update_engine[1975]: I20251123 23:00:24.745409 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 23 23:00:24.747337 update_engine[1975]: I20251123 23:00:24.746490 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 23 23:00:24.754127 update_engine[1975]: E20251123 23:00:24.753926 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 23 23:00:24.754127 update_engine[1975]: I20251123 23:00:24.754066 1975 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 23 23:00:25.748763 containerd[2005]: time="2025-11-23T23:00:25.748686172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:00:25.888576 systemd[1]: cri-containerd-e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36.scope: Deactivated successfully. Nov 23 23:00:25.889978 systemd[1]: cri-containerd-e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36.scope: Consumed 4.124s CPU time, 20.8M memory peak. Nov 23 23:00:25.893917 containerd[2005]: time="2025-11-23T23:00:25.892964008Z" level=info msg="received container exit event container_id:\"e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36\" id:\"e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36\" pid:3161 exit_status:1 exited_at:{seconds:1763938825 nanos:892108816}" Nov 23 23:00:25.938634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36-rootfs.mount: Deactivated successfully. Nov 23 23:00:26.041516 containerd[2005]: time="2025-11-23T23:00:26.041451637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:26.042923 containerd[2005]: time="2025-11-23T23:00:26.042791485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:00:26.043084 containerd[2005]: time="2025-11-23T23:00:26.042877417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:00:26.043149 kubelet[3322]: E1123 23:00:26.043099 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:26.044445 kubelet[3322]: E1123 23:00:26.043161 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:26.044445 kubelet[3322]: E1123 23:00:26.043459 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:49a5b0f5daf440afb726a29c7c6e8f8b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:26.044903 containerd[2005]: time="2025-11-23T23:00:26.043848997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:00:26.326897 containerd[2005]: time="2025-11-23T23:00:26.326696187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:26.327814 containerd[2005]: time="2025-11-23T23:00:26.327737823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:00:26.327950 containerd[2005]: time="2025-11-23T23:00:26.327863739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:00:26.328135 kubelet[3322]: E1123 23:00:26.328076 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:26.328239 kubelet[3322]: E1123 23:00:26.328144 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:26.328547 kubelet[3322]: E1123 23:00:26.328471 3322 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:26.329353 containerd[2005]: time="2025-11-23T23:00:26.328838379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:00:26.576686 kubelet[3322]: I1123 23:00:26.576636 3322 scope.go:117] "RemoveContainer" containerID="e2b21cbd082081419c9c93501dba5db2f5cefdf632025633025b674d047dde36" Nov 23 23:00:26.580310 containerd[2005]: time="2025-11-23T23:00:26.580114564Z" level=info msg="CreateContainer within sandbox \"00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 23 23:00:26.593508 containerd[2005]: time="2025-11-23T23:00:26.593301148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:26.597282 containerd[2005]: time="2025-11-23T23:00:26.595436656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:00:26.597282 containerd[2005]: time="2025-11-23T23:00:26.595568932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" 
Nov 23 23:00:26.597282 containerd[2005]: time="2025-11-23T23:00:26.595910428Z" level=info msg="Container 9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:00:26.598475 kubelet[3322]: E1123 23:00:26.598396 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:26.598608 kubelet[3322]: E1123 23:00:26.598490 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:26.598836 kubelet[3322]: E1123 23:00:26.598759 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tm79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-69f9f4876b-55rzk_calico-system(30c50e65-a97a-4ae6-b165-6f81318bd6a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:26.600079 containerd[2005]: time="2025-11-23T23:00:26.599145520Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:00:26.600581 kubelet[3322]: E1123 23:00:26.600494 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7" Nov 23 23:00:26.623050 containerd[2005]: time="2025-11-23T23:00:26.623000056Z" level=info msg="CreateContainer within sandbox \"00215b53c538eb97cc99bfa983e79880a87a254c63092f4eec162afa371b3ea4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d\"" Nov 23 23:00:26.624307 containerd[2005]: time="2025-11-23T23:00:26.623946400Z" level=info msg="StartContainer for \"9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d\"" Nov 23 23:00:26.626299 containerd[2005]: time="2025-11-23T23:00:26.626233876Z" level=info msg="connecting to shim 9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d" address="unix:///run/containerd/s/63baa3976780733990aedb1481ab4de9042cf93aa2327f6d133030eff7b437f4" protocol=ttrpc version=3 Nov 23 23:00:26.667577 systemd[1]: Started cri-containerd-9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d.scope - libcontainer container 9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d. 
Nov 23 23:00:26.760130 containerd[2005]: time="2025-11-23T23:00:26.760002737Z" level=info msg="StartContainer for \"9fd1cbae2a75385dc4e68185506b15c2eda8b91f07f9a2bbc4e68cf28f6a7c9d\" returns successfully" Nov 23 23:00:26.880027 containerd[2005]: time="2025-11-23T23:00:26.879869165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:26.882957 containerd[2005]: time="2025-11-23T23:00:26.882844049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:00:26.882957 containerd[2005]: time="2025-11-23T23:00:26.882919061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:00:26.883995 kubelet[3322]: E1123 23:00:26.883248 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:26.883995 kubelet[3322]: E1123 23:00:26.883330 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:26.883995 kubelet[3322]: E1123 23:00:26.883673 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pvh6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rz2c9_calico-system(b6239d0a-f247-4ff7-8f39-2d2983756ead): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:26.884721 containerd[2005]: time="2025-11-23T23:00:26.884226017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:26.885702 kubelet[3322]: E1123 23:00:26.885624 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead" Nov 23 23:00:27.161122 containerd[2005]: time="2025-11-23T23:00:27.160861563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:27.163876 containerd[2005]: time="2025-11-23T23:00:27.163772187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:27.164094 containerd[2005]: time="2025-11-23T23:00:27.163820067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:27.164822 kubelet[3322]: E1123 23:00:27.164464 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:27.164822 kubelet[3322]: E1123 23:00:27.164526 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:27.164822 kubelet[3322]: E1123 23:00:27.164690 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8ffj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-855476946d-znnxr_calico-apiserver(d24d7369-6494-4a66-8309-347720b5fc56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:00:27.165983 kubelet[3322]: E1123 23:00:27.165924 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56"
Nov 23 23:00:28.746224 kubelet[3322]: E1123 23:00:28.746112 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-hc826" podUID="ebeea6c8-b9a6-4d9a-a1c8-ed3aa29510ef"
Nov 23 23:00:29.747523 containerd[2005]: time="2025-11-23T23:00:29.747158924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 23 23:00:30.004736 containerd[2005]: time="2025-11-23T23:00:30.004573109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:00:30.006939 containerd[2005]: time="2025-11-23T23:00:30.006870569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 23 23:00:30.007056 containerd[2005]: time="2025-11-23T23:00:30.006999929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:00:30.007324 kubelet[3322]: E1123 23:00:30.007218 3322 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 23 23:00:30.007843 kubelet[3322]: E1123 23:00:30.007386 3322 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 23 23:00:30.007843 kubelet[3322]: E1123 23:00:30.007694 3322 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq7x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zrbmg_calico-system(328f5f71-5736-4873-add1-f3d5d3b7eef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:00:30.009083 kubelet[3322]: E1123 23:00:30.009024 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zrbmg" podUID="328f5f71-5736-4873-add1-f3d5d3b7eef2"
Nov 23 23:00:31.922293 kubelet[3322]: E1123 23:00:31.921925 3322 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-147?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 23 23:00:32.179104 systemd[1]: cri-containerd-f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f.scope: Deactivated successfully.
Nov 23 23:00:32.181051 containerd[2005]: time="2025-11-23T23:00:32.180885188Z" level=info msg="received container exit event container_id:\"f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f\" id:\"f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f\" pid:6059 exit_status:1 exited_at:{seconds:1763938832 nanos:180066884}"
Nov 23 23:00:32.223689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f-rootfs.mount: Deactivated successfully.
Nov 23 23:00:32.605621 kubelet[3322]: I1123 23:00:32.605549 3322 scope.go:117] "RemoveContainer" containerID="e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85"
Nov 23 23:00:32.606737 kubelet[3322]: I1123 23:00:32.606682 3322 scope.go:117] "RemoveContainer" containerID="f5edd83127aeb5b43e2ab5fd4a1adc1d0cee2ba72d57f5e552630f216a86eb2f"
Nov 23 23:00:32.607343 kubelet[3322]: E1123 23:00:32.606925 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-vqptz_tigera-operator(b7a68a53-d574-4846-9408-c5e58911d7a5)\"" pod="tigera-operator/tigera-operator-7dcd859c48-vqptz" podUID="b7a68a53-d574-4846-9408-c5e58911d7a5"
Nov 23 23:00:32.611234 containerd[2005]: time="2025-11-23T23:00:32.611139886Z" level=info msg="RemoveContainer for \"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\""
Nov 23 23:00:32.620999 containerd[2005]: time="2025-11-23T23:00:32.620821006Z" level=info msg="RemoveContainer for \"e591a27d73de6a87ffbc1faacb301d4fab7f1bf5bf969cc9191a443a7ef89a85\" returns successfully"
Nov 23 23:00:33.746735 kubelet[3322]: E1123 23:00:33.746668 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5d46955649-8px8j" podUID="efcb5707-de3f-40a1-84e7-2d29faf16856"
Nov 23 23:00:34.744833 update_engine[1975]: I20251123 23:00:34.744758 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 23 23:00:34.746138 update_engine[1975]: I20251123 23:00:34.745373 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 23 23:00:34.746138 update_engine[1975]: I20251123 23:00:34.745882 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 23 23:00:34.755323 update_engine[1975]: E20251123 23:00:34.754760 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.754876 1975 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.754894 1975 omaha_request_action.cc:617] Omaha request response:
Nov 23 23:00:34.755323 update_engine[1975]: E20251123 23:00:34.755003 1975 omaha_request_action.cc:636] Omaha request network transfer failed.
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.755040 1975 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.755055 1975 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.755069 1975 update_attempter.cc:306] Processing Done.
Nov 23 23:00:34.755323 update_engine[1975]: E20251123 23:00:34.755092 1975 update_attempter.cc:619] Update failed.
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.755106 1975 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.755120 1975 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Nov 23 23:00:34.755323 update_engine[1975]: I20251123 23:00:34.755132 1975 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Nov 23 23:00:34.755933 update_engine[1975]: I20251123 23:00:34.755240 1975 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 23 23:00:34.755933 update_engine[1975]: I20251123 23:00:34.755880 1975 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 23 23:00:34.755933 update_engine[1975]: I20251123 23:00:34.755904 1975 omaha_request_action.cc:272] Request:
Nov 23 23:00:34.755933 update_engine[1975]:
Nov 23 23:00:34.755933 update_engine[1975]:
Nov 23 23:00:34.755933 update_engine[1975]:
Nov 23 23:00:34.755933 update_engine[1975]:
Nov 23 23:00:34.755933 update_engine[1975]:
Nov 23 23:00:34.755933 update_engine[1975]:
Nov 23 23:00:34.755933 update_engine[1975]: I20251123 23:00:34.755920 1975 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 23 23:00:34.756419 locksmithd[2025]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Nov 23 23:00:34.756874 update_engine[1975]: I20251123 23:00:34.755964 1975 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 23 23:00:34.756874 update_engine[1975]: I20251123 23:00:34.756611 1975 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 23 23:00:34.757631 update_engine[1975]: E20251123 23:00:34.757584 1975 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 23 23:00:34.757719 update_engine[1975]: I20251123 23:00:34.757696 1975 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 23 23:00:34.757779 update_engine[1975]: I20251123 23:00:34.757716 1975 omaha_request_action.cc:617] Omaha request response:
Nov 23 23:00:34.757779 update_engine[1975]: I20251123 23:00:34.757733 1975 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 23 23:00:34.757779 update_engine[1975]: I20251123 23:00:34.757746 1975 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 23 23:00:34.757779 update_engine[1975]: I20251123 23:00:34.757759 1975 update_attempter.cc:306] Processing Done.
Nov 23 23:00:34.757779 update_engine[1975]: I20251123 23:00:34.757773 1975 update_attempter.cc:310] Error event sent.
Nov 23 23:00:34.757998 update_engine[1975]: I20251123 23:00:34.757791 1975 update_check_scheduler.cc:74] Next update check in 46m51s
Nov 23 23:00:34.758345 locksmithd[2025]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Nov 23 23:00:38.747297 kubelet[3322]: E1123 23:00:38.747162 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-855476946d-znnxr" podUID="d24d7369-6494-4a66-8309-347720b5fc56"
Nov 23 23:00:39.748237 kubelet[3322]: E1123 23:00:39.748113 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-69f9f4876b-55rzk" podUID="30c50e65-a97a-4ae6-b165-6f81318bd6a7"
Nov 23 23:00:40.748097 kubelet[3322]: E1123 23:00:40.747988 3322 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rz2c9" podUID="b6239d0a-f247-4ff7-8f39-2d2983756ead"