Oct 29 23:30:13.164931 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 29 23:30:13.164975 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Oct 29 22:07:18 -00 2025
Oct 29 23:30:13.164999 kernel: KASLR disabled due to lack of seed
Oct 29 23:30:13.165015 kernel: efi: EFI v2.7 by EDK II
Oct 29 23:30:13.165031 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78557598
Oct 29 23:30:13.165046 kernel: secureboot: Secure boot disabled
Oct 29 23:30:13.165063 kernel: ACPI: Early table checksum verification disabled
Oct 29 23:30:13.165078 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 29 23:30:13.165093 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 29 23:30:13.165108 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 29 23:30:13.165123 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 29 23:30:13.165142 kernel: ACPI: FACS 0x0000000078630000 000040
Oct 29 23:30:13.165157 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 29 23:30:13.165173 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 29 23:30:13.165190 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 29 23:30:13.165206 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 29 23:30:13.165226 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 29 23:30:13.165242 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 29 23:30:13.165258 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 29 23:30:13.165274 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 29 23:30:13.165290 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 29 23:30:13.165306 kernel: printk: legacy bootconsole [uart0] enabled
Oct 29 23:30:13.165322 kernel: ACPI: Use ACPI SPCR as default console: No
Oct 29 23:30:13.165338 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 29 23:30:13.165354 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Oct 29 23:30:13.165370 kernel: Zone ranges:
Oct 29 23:30:13.165387 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 29 23:30:13.165407 kernel: DMA32 empty
Oct 29 23:30:13.165424 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 29 23:30:13.165439 kernel: Device empty
Oct 29 23:30:13.165457 kernel: Movable zone start for each node
Oct 29 23:30:13.165475 kernel: Early memory node ranges
Oct 29 23:30:13.165491 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Oct 29 23:30:13.165507 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Oct 29 23:30:13.165522 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Oct 29 23:30:13.165539 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 29 23:30:13.165555 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 29 23:30:13.165572 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 29 23:30:13.165587 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 29 23:30:13.165608 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 29 23:30:13.165630 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 29 23:30:13.165680 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 29 23:30:13.165707 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Oct 29 23:30:13.165726 kernel: psci: probing for conduit method from ACPI.
Oct 29 23:30:13.165749 kernel: psci: PSCIv1.0 detected in firmware.
Oct 29 23:30:13.165767 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 29 23:30:13.165809 kernel: psci: Trusted OS migration not required
Oct 29 23:30:13.165828 kernel: psci: SMC Calling Convention v1.1
Oct 29 23:30:13.165845 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Oct 29 23:30:13.165862 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Oct 29 23:30:13.165879 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Oct 29 23:30:13.165896 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 29 23:30:13.165913 kernel: Detected PIPT I-cache on CPU0
Oct 29 23:30:13.165929 kernel: CPU features: detected: GIC system register CPU interface
Oct 29 23:30:13.165946 kernel: CPU features: detected: Spectre-v2
Oct 29 23:30:13.165968 kernel: CPU features: detected: Spectre-v3a
Oct 29 23:30:13.165985 kernel: CPU features: detected: Spectre-BHB
Oct 29 23:30:13.166002 kernel: CPU features: detected: ARM erratum 1742098
Oct 29 23:30:13.166019 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 29 23:30:13.166036 kernel: alternatives: applying boot alternatives
Oct 29 23:30:13.166055 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1714a6d4d6c76fbe0af2166549be0df85ee0260f299bb3baeaf286f50f12863
Oct 29 23:30:13.166072 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 23:30:13.166089 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 23:30:13.166106 kernel: Fallback order for Node 0: 0
Oct 29 23:30:13.166123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Oct 29 23:30:13.166139 kernel: Policy zone: Normal
Oct 29 23:30:13.166160 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 23:30:13.166177 kernel: software IO TLB: area num 2.
Oct 29 23:30:13.166193 kernel: software IO TLB: mapped [mem 0x000000006c5f0000-0x00000000705f0000] (64MB)
Oct 29 23:30:13.166210 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 29 23:30:13.166226 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 23:30:13.166244 kernel: rcu: RCU event tracing is enabled.
Oct 29 23:30:13.166261 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 29 23:30:13.166278 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 23:30:13.166295 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 23:30:13.166312 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 23:30:13.166329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 29 23:30:13.166350 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 29 23:30:13.166367 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Oct 29 23:30:13.166384 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 29 23:30:13.166401 kernel: GICv3: 96 SPIs implemented
Oct 29 23:30:13.166418 kernel: GICv3: 0 Extended SPIs implemented
Oct 29 23:30:13.166434 kernel: Root IRQ handler: gic_handle_irq
Oct 29 23:30:13.166451 kernel: GICv3: GICv3 features: 16 PPIs
Oct 29 23:30:13.166467 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Oct 29 23:30:13.166484 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 29 23:30:13.166500 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 29 23:30:13.166517 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Oct 29 23:30:13.166535 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Oct 29 23:30:13.166557 kernel: GICv3: using LPI property table @0x0000000400110000
Oct 29 23:30:13.166573 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 29 23:30:13.166590 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Oct 29 23:30:13.166606 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 29 23:30:13.166623 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 29 23:30:13.166640 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 29 23:30:13.168993 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 29 23:30:13.169016 kernel: Console: colour dummy device 80x25
Oct 29 23:30:13.169033 kernel: printk: legacy console [tty1] enabled
Oct 29 23:30:13.169051 kernel: ACPI: Core revision 20240827
Oct 29 23:30:13.169077 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 29 23:30:13.169095 kernel: pid_max: default: 32768 minimum: 301
Oct 29 23:30:13.169112 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 29 23:30:13.169129 kernel: landlock: Up and running.
Oct 29 23:30:13.169146 kernel: SELinux: Initializing.
Oct 29 23:30:13.169163 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 23:30:13.169182 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 23:30:13.169199 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 23:30:13.169217 kernel: rcu: Max phase no-delay instances is 400.
Oct 29 23:30:13.169240 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 29 23:30:13.169258 kernel: Remapping and enabling EFI services.
Oct 29 23:30:13.169276 kernel: smp: Bringing up secondary CPUs ...
Oct 29 23:30:13.169293 kernel: Detected PIPT I-cache on CPU1
Oct 29 23:30:13.169310 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 29 23:30:13.169327 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Oct 29 23:30:13.169345 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 29 23:30:13.169363 kernel: smp: Brought up 1 node, 2 CPUs
Oct 29 23:30:13.169380 kernel: SMP: Total of 2 processors activated.
Oct 29 23:30:13.169404 kernel: CPU: All CPU(s) started at EL1
Oct 29 23:30:13.169433 kernel: CPU features: detected: 32-bit EL0 Support
Oct 29 23:30:13.169453 kernel: CPU features: detected: 32-bit EL1 Support
Oct 29 23:30:13.169475 kernel: CPU features: detected: CRC32 instructions
Oct 29 23:30:13.169494 kernel: alternatives: applying system-wide alternatives
Oct 29 23:30:13.169515 kernel: Memory: 3796972K/4030464K available (11136K kernel code, 2450K rwdata, 9076K rodata, 38976K init, 1038K bss, 212148K reserved, 16384K cma-reserved)
Oct 29 23:30:13.169533 kernel: devtmpfs: initialized
Oct 29 23:30:13.169552 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 23:30:13.169575 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 29 23:30:13.169593 kernel: 17040 pages in range for non-PLT usage
Oct 29 23:30:13.169611 kernel: 508560 pages in range for PLT usage
Oct 29 23:30:13.169629 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 23:30:13.169689 kernel: SMBIOS 3.0.0 present.
Oct 29 23:30:13.169719 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 29 23:30:13.169739 kernel: DMI: Memory slots populated: 0/0
Oct 29 23:30:13.169757 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 23:30:13.169775 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 29 23:30:13.169851 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 29 23:30:13.169870 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 29 23:30:13.169888 kernel: audit: initializing netlink subsys (disabled)
Oct 29 23:30:13.169906 kernel: audit: type=2000 audit(0.258:1): state=initialized audit_enabled=0 res=1
Oct 29 23:30:13.169925 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 23:30:13.169942 kernel: cpuidle: using governor menu
Oct 29 23:30:13.169960 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 29 23:30:13.169978 kernel: ASID allocator initialised with 65536 entries
Oct 29 23:30:13.169995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 23:30:13.170017 kernel: Serial: AMBA PL011 UART driver
Oct 29 23:30:13.170034 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 23:30:13.170052 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 29 23:30:13.170069 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 29 23:30:13.170088 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 29 23:30:13.170106 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 23:30:13.170123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 29 23:30:13.170141 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 29 23:30:13.170158 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 29 23:30:13.170180 kernel: ACPI: Added _OSI(Module Device)
Oct 29 23:30:13.170198 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 23:30:13.170216 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 23:30:13.170234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 23:30:13.170251 kernel: ACPI: Interpreter enabled
Oct 29 23:30:13.170277 kernel: ACPI: Using GIC for interrupt routing
Oct 29 23:30:13.170294 kernel: ACPI: MCFG table detected, 1 entries
Oct 29 23:30:13.170313 kernel: ACPI: CPU0 has been hot-added
Oct 29 23:30:13.170331 kernel: ACPI: CPU1 has been hot-added
Oct 29 23:30:13.170354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 29 23:30:13.176906 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 23:30:13.177154 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 29 23:30:13.177335 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 29 23:30:13.177593 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 29 23:30:13.179886 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 29 23:30:13.179932 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 29 23:30:13.179961 kernel: acpiphp: Slot [1] registered
Oct 29 23:30:13.179981 kernel: acpiphp: Slot [2] registered
Oct 29 23:30:13.179999 kernel: acpiphp: Slot [3] registered
Oct 29 23:30:13.180016 kernel: acpiphp: Slot [4] registered
Oct 29 23:30:13.180034 kernel: acpiphp: Slot [5] registered
Oct 29 23:30:13.180052 kernel: acpiphp: Slot [6] registered
Oct 29 23:30:13.180070 kernel: acpiphp: Slot [7] registered
Oct 29 23:30:13.180087 kernel: acpiphp: Slot [8] registered
Oct 29 23:30:13.180105 kernel: acpiphp: Slot [9] registered
Oct 29 23:30:13.180126 kernel: acpiphp: Slot [10] registered
Oct 29 23:30:13.180144 kernel: acpiphp: Slot [11] registered
Oct 29 23:30:13.180162 kernel: acpiphp: Slot [12] registered
Oct 29 23:30:13.180179 kernel: acpiphp: Slot [13] registered
Oct 29 23:30:13.180197 kernel: acpiphp: Slot [14] registered
Oct 29 23:30:13.180214 kernel: acpiphp: Slot [15] registered
Oct 29 23:30:13.180232 kernel: acpiphp: Slot [16] registered
Oct 29 23:30:13.180250 kernel: acpiphp: Slot [17] registered
Oct 29 23:30:13.180268 kernel: acpiphp: Slot [18] registered
Oct 29 23:30:13.180285 kernel: acpiphp: Slot [19] registered
Oct 29 23:30:13.180307 kernel: acpiphp: Slot [20] registered
Oct 29 23:30:13.180324 kernel: acpiphp: Slot [21] registered
Oct 29 23:30:13.180342 kernel: acpiphp: Slot [22] registered
Oct 29 23:30:13.180359 kernel: acpiphp: Slot [23] registered
Oct 29 23:30:13.180377 kernel: acpiphp: Slot [24] registered
Oct 29 23:30:13.180418 kernel: acpiphp: Slot [25] registered
Oct 29 23:30:13.180437 kernel: acpiphp: Slot [26] registered
Oct 29 23:30:13.180455 kernel: acpiphp: Slot [27] registered
Oct 29 23:30:13.180472 kernel: acpiphp: Slot [28] registered
Oct 29 23:30:13.180495 kernel: acpiphp: Slot [29] registered
Oct 29 23:30:13.180514 kernel: acpiphp: Slot [30] registered
Oct 29 23:30:13.180531 kernel: acpiphp: Slot [31] registered
Oct 29 23:30:13.180549 kernel: PCI host bridge to bus 0000:00
Oct 29 23:30:13.180807 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 29 23:30:13.180978 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 29 23:30:13.181143 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 29 23:30:13.181305 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 29 23:30:13.181523 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Oct 29 23:30:13.182812 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Oct 29 23:30:13.183043 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Oct 29 23:30:13.183253 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Oct 29 23:30:13.183445 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Oct 29 23:30:13.183633 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 29 23:30:13.183892 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Oct 29 23:30:13.184081 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Oct 29 23:30:13.184263 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Oct 29 23:30:13.184443 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Oct 29 23:30:13.184623 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 29 23:30:13.190927 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Oct 29 23:30:13.191162 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Oct 29 23:30:13.191365 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Oct 29 23:30:13.191564 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Oct 29 23:30:13.194683 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Oct 29 23:30:13.194893 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 29 23:30:13.195058 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 29 23:30:13.195222 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 29 23:30:13.195247 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 29 23:30:13.195274 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 29 23:30:13.195293 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 29 23:30:13.195311 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 29 23:30:13.195329 kernel: iommu: Default domain type: Translated
Oct 29 23:30:13.195347 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 29 23:30:13.195364 kernel: efivars: Registered efivars operations
Oct 29 23:30:13.195382 kernel: vgaarb: loaded
Oct 29 23:30:13.195400 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 29 23:30:13.195417 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 23:30:13.195439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 23:30:13.195457 kernel: pnp: PnP ACPI init
Oct 29 23:30:13.195667 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 29 23:30:13.195696 kernel: pnp: PnP ACPI: found 1 devices
Oct 29 23:30:13.195716 kernel: NET: Registered PF_INET protocol family
Oct 29 23:30:13.195735 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 23:30:13.195753 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 23:30:13.195772 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 23:30:13.195790 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 23:30:13.195814 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 29 23:30:13.195832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 23:30:13.195850 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 23:30:13.195868 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 23:30:13.195886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 23:30:13.195904 kernel: PCI: CLS 0 bytes, default 64
Oct 29 23:30:13.195922 kernel: kvm [1]: HYP mode not available
Oct 29 23:30:13.195940 kernel: Initialise system trusted keyrings
Oct 29 23:30:13.195959 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 23:30:13.195981 kernel: Key type asymmetric registered
Oct 29 23:30:13.196000 kernel: Asymmetric key parser 'x509' registered
Oct 29 23:30:13.196018 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 29 23:30:13.196038 kernel: io scheduler mq-deadline registered
Oct 29 23:30:13.196057 kernel: io scheduler kyber registered
Oct 29 23:30:13.196076 kernel: io scheduler bfq registered
Oct 29 23:30:13.196315 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 29 23:30:13.196344 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 29 23:30:13.196371 kernel: ACPI: button: Power Button [PWRB]
Oct 29 23:30:13.196390 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Oct 29 23:30:13.196408 kernel: ACPI: button: Sleep Button [SLPB]
Oct 29 23:30:13.196427 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 23:30:13.196446 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 29 23:30:13.199727 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 29 23:30:13.199779 kernel: printk: legacy console [ttyS0] disabled
Oct 29 23:30:13.199799 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 29 23:30:13.199818 kernel: printk: legacy console [ttyS0] enabled
Oct 29 23:30:13.199846 kernel: printk: legacy bootconsole [uart0] disabled
Oct 29 23:30:13.199864 kernel: thunder_xcv, ver 1.0
Oct 29 23:30:13.199882 kernel: thunder_bgx, ver 1.0
Oct 29 23:30:13.199899 kernel: nicpf, ver 1.0
Oct 29 23:30:13.199917 kernel: nicvf, ver 1.0
Oct 29 23:30:13.200170 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 29 23:30:13.200352 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-29T23:30:12 UTC (1761780612)
Oct 29 23:30:13.200377 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 29 23:30:13.200402 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Oct 29 23:30:13.200424 kernel: NET: Registered PF_INET6 protocol family
Oct 29 23:30:13.200442 kernel: watchdog: NMI not fully supported
Oct 29 23:30:13.200459 kernel: watchdog: Hard watchdog permanently disabled
Oct 29 23:30:13.200477 kernel: Segment Routing with IPv6
Oct 29 23:30:13.200495 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 23:30:13.200513 kernel: NET: Registered PF_PACKET protocol family
Oct 29 23:30:13.200530 kernel: Key type dns_resolver registered
Oct 29 23:30:13.200548 kernel: registered taskstats version 1
Oct 29 23:30:13.200570 kernel: Loading compiled-in X.509 certificates
Oct 29 23:30:13.200589 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 7e3febc5e0a8b643b4690bc3ed5e79b236e1ccf8'
Oct 29 23:30:13.200607 kernel: Demotion targets for Node 0: null
Oct 29 23:30:13.200625 kernel: Key type .fscrypt registered
Oct 29 23:30:13.200642 kernel: Key type fscrypt-provisioning registered
Oct 29 23:30:13.200691 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 23:30:13.200711 kernel: ima: Allocated hash algorithm: sha1
Oct 29 23:30:13.200729 kernel: ima: No architecture policies found
Oct 29 23:30:13.200746 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 29 23:30:13.200770 kernel: clk: Disabling unused clocks
Oct 29 23:30:13.200788 kernel: PM: genpd: Disabling unused power domains
Oct 29 23:30:13.200806 kernel: Warning: unable to open an initial console.
Oct 29 23:30:13.200824 kernel: Freeing unused kernel memory: 38976K
Oct 29 23:30:13.200842 kernel: Run /init as init process
Oct 29 23:30:13.200860 kernel: with arguments:
Oct 29 23:30:13.200878 kernel: /init
Oct 29 23:30:13.200895 kernel: with environment:
Oct 29 23:30:13.200912 kernel: HOME=/
Oct 29 23:30:13.200934 kernel: TERM=linux
Oct 29 23:30:13.200954 systemd[1]: Successfully made /usr/ read-only.
Oct 29 23:30:13.200978 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 29 23:30:13.200999 systemd[1]: Detected virtualization amazon.
Oct 29 23:30:13.201017 systemd[1]: Detected architecture arm64.
Oct 29 23:30:13.201036 systemd[1]: Running in initrd.
Oct 29 23:30:13.201054 systemd[1]: No hostname configured, using default hostname.
Oct 29 23:30:13.201079 systemd[1]: Hostname set to .
Oct 29 23:30:13.201098 systemd[1]: Initializing machine ID from VM UUID.
Oct 29 23:30:13.201117 systemd[1]: Queued start job for default target initrd.target.
Oct 29 23:30:13.201136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 29 23:30:13.201155 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 29 23:30:13.201176 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 29 23:30:13.201195 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 29 23:30:13.201215 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 29 23:30:13.201239 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 29 23:30:13.201261 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 29 23:30:13.201280 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 29 23:30:13.201299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 29 23:30:13.201318 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 29 23:30:13.201337 systemd[1]: Reached target paths.target - Path Units.
Oct 29 23:30:13.201356 systemd[1]: Reached target slices.target - Slice Units.
Oct 29 23:30:13.201379 systemd[1]: Reached target swap.target - Swaps.
Oct 29 23:30:13.201398 systemd[1]: Reached target timers.target - Timer Units.
Oct 29 23:30:13.201417 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 29 23:30:13.201436 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 29 23:30:13.201455 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 29 23:30:13.201474 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 29 23:30:13.201493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 29 23:30:13.201512 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 29 23:30:13.201531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 29 23:30:13.201554 systemd[1]: Reached target sockets.target - Socket Units.
Oct 29 23:30:13.201574 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 29 23:30:13.201593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 29 23:30:13.201613 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 29 23:30:13.201632 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 29 23:30:13.202380 systemd[1]: Starting systemd-fsck-usr.service...
Oct 29 23:30:13.202410 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 29 23:30:13.202429 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 29 23:30:13.202457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 23:30:13.202477 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 29 23:30:13.202497 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 29 23:30:13.202516 systemd[1]: Finished systemd-fsck-usr.service.
Oct 29 23:30:13.202536 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 29 23:30:13.202560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 23:30:13.202580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 29 23:30:13.202641 systemd-journald[256]: Collecting audit messages is disabled.
Oct 29 23:30:13.202722 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 29 23:30:13.202749 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 29 23:30:13.202781 kernel: Bridge firewalling registered
Oct 29 23:30:13.202806 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 29 23:30:13.202826 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 29 23:30:13.202846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 29 23:30:13.202866 systemd-journald[256]: Journal started
Oct 29 23:30:13.202907 systemd-journald[256]: Runtime Journal (/run/log/journal/ec299a1eb257db4bb4907a570ad13e4c) is 8M, max 75.3M, 67.3M free.
Oct 29 23:30:13.115861 systemd-modules-load[259]: Inserted module 'overlay'
Oct 29 23:30:13.165497 systemd-modules-load[259]: Inserted module 'br_netfilter'
Oct 29 23:30:13.213979 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 29 23:30:13.214455 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 29 23:30:13.225297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 29 23:30:13.243880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 29 23:30:13.256104 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 29 23:30:13.262028 systemd-tmpfiles[286]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 29 23:30:13.265497 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 29 23:30:13.276303 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 29 23:30:13.287011 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 29 23:30:13.333875 dracut-cmdline[296]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1714a6d4d6c76fbe0af2166549be0df85ee0260f299bb3baeaf286f50f12863
Oct 29 23:30:13.391268 systemd-resolved[299]: Positive Trust Anchors:
Oct 29 23:30:13.391295 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 23:30:13.391355 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 29 23:30:13.515688 kernel: SCSI subsystem initialized
Oct 29 23:30:13.523685 kernel: Loading iSCSI transport class v2.0-870.
Oct 29 23:30:13.535726 kernel: iscsi: registered transport (tcp)
Oct 29 23:30:13.558119 kernel: iscsi: registered transport (qla4xxx)
Oct 29 23:30:13.558201 kernel: QLogic iSCSI HBA Driver
Oct 29 23:30:13.591891 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 29 23:30:13.617868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 29 23:30:13.633280 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 29 23:30:13.678731 kernel: random: crng init done
Oct 29 23:30:13.678967 systemd-resolved[299]: Defaulting to hostname 'linux'.
Oct 29 23:30:13.682018 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 29 23:30:13.683316 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 29 23:30:13.734846 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 29 23:30:13.742288 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 29 23:30:13.843696 kernel: raid6: neonx8 gen() 6465 MB/s
Oct 29 23:30:13.860681 kernel: raid6: neonx4 gen() 6452 MB/s
Oct 29 23:30:13.877682 kernel: raid6: neonx2 gen() 5361 MB/s
Oct 29 23:30:13.894682 kernel: raid6: neonx1 gen() 3925 MB/s
Oct 29 23:30:13.911681 kernel: raid6: int64x8 gen() 3640 MB/s
Oct 29 23:30:13.928681 kernel: raid6: int64x4 gen() 3682 MB/s
Oct 29 23:30:13.945683 kernel: raid6: int64x2 gen() 3562 MB/s
Oct 29 23:30:13.963744 kernel: raid6: int64x1 gen() 2771 MB/s
Oct 29 23:30:13.963791 kernel: raid6: using algorithm neonx8 gen() 6465 MB/s
Oct 29 23:30:13.982697 kernel: raid6: .... xor() 4720 MB/s, rmw enabled
Oct 29 23:30:13.982733 kernel: raid6: using neon recovery algorithm
Oct 29 23:30:13.991375 kernel: xor: measuring software checksum speed
Oct 29 23:30:13.991425 kernel: 8regs : 12934 MB/sec
Oct 29 23:30:13.992577 kernel: 32regs : 13041 MB/sec
Oct 29 23:30:13.994989 kernel: arm64_neon : 8554 MB/sec
Oct 29 23:30:13.995023 kernel: xor: using function: 32regs (13041 MB/sec)
Oct 29 23:30:14.085694 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 29 23:30:14.097312 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 29 23:30:14.103240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 29 23:30:14.158028 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Oct 29 23:30:14.170323 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 29 23:30:14.174747 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 29 23:30:14.222718 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Oct 29 23:30:14.266858 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 29 23:30:14.272006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 29 23:30:14.398005 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 29 23:30:14.407150 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 29 23:30:14.576343 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 29 23:30:14.576415 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Oct 29 23:30:14.581450 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Oct 29 23:30:14.581512 kernel: nvme nvme0: pci function 0000:00:04.0
Oct 29 23:30:14.585956 kernel: ena 0000:00:05.0: ENA device version: 0.10
Oct 29 23:30:14.586268 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Oct 29 23:30:14.586883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 23:30:14.587138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 23:30:14.593173 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 23:30:14.600404 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct 29 23:30:14.613896 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 29 23:30:14.613956 kernel: GPT:9289727 != 33554431
Oct 29 23:30:14.613982 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 29 23:30:14.614007 kernel: GPT:9289727 != 33554431
Oct 29 23:30:14.606907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 29 23:30:14.625681 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 29 23:30:14.625734 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 29 23:30:14.625760 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d9:b1:b8:4d:d3
Oct 29 23:30:14.613538 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 29 23:30:14.634736 (udev-worker)[560]: Network interface NamePolicy= disabled on kernel command line.
Oct 29 23:30:14.673070 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 29 23:30:14.682724 kernel: nvme nvme0: using unchecked data buffer
Oct 29 23:30:14.803619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Oct 29 23:30:14.878111 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Oct 29 23:30:14.884886 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 29 23:30:14.908532 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Oct 29 23:30:14.916546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Oct 29 23:30:14.956057 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 29 23:30:14.961833 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 29 23:30:14.965082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 29 23:30:14.970892 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 29 23:30:14.979553 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 29 23:30:14.984023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 29 23:30:15.014175 disk-uuid[692]: Primary Header is updated.
Oct 29 23:30:15.014175 disk-uuid[692]: Secondary Entries is updated.
Oct 29 23:30:15.014175 disk-uuid[692]: Secondary Header is updated.
Oct 29 23:30:15.028733 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 29 23:30:15.037710 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 29 23:30:16.050750 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 29 23:30:16.053221 disk-uuid[694]: The operation has completed successfully.
Oct 29 23:30:16.240238 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 29 23:30:16.240412 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 29 23:30:16.645638 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 29 23:30:16.681848 sh[960]: Success
Oct 29 23:30:16.711053 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 29 23:30:16.711127 kernel: device-mapper: uevent: version 1.0.3
Oct 29 23:30:16.713139 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 29 23:30:16.725700 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Oct 29 23:30:16.830514 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 29 23:30:16.834759 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 29 23:30:16.853115 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 29 23:30:16.870672 kernel: BTRFS: device fsid fb1de99b-69c1-4598-af66-3a61dd29143e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (983)
Oct 29 23:30:16.874781 kernel: BTRFS info (device dm-0): first mount of filesystem fb1de99b-69c1-4598-af66-3a61dd29143e
Oct 29 23:30:16.874839 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 29 23:30:17.021820 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 29 23:30:17.021886 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 29 23:30:17.023153 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 29 23:30:17.037381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 29 23:30:17.037854 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 29 23:30:17.044672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 29 23:30:17.045877 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 29 23:30:17.064482 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 29 23:30:17.110714 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1008)
Oct 29 23:30:17.115250 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0
Oct 29 23:30:17.115318 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 29 23:30:17.135751 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 29 23:30:17.135823 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Oct 29 23:30:17.143904 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0
Oct 29 23:30:17.145310 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 29 23:30:17.152098 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 29 23:30:17.244492 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 29 23:30:17.254856 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 29 23:30:17.322351 systemd-networkd[1152]: lo: Link UP
Oct 29 23:30:17.322373 systemd-networkd[1152]: lo: Gained carrier
Oct 29 23:30:17.326771 systemd-networkd[1152]: Enumeration completed
Oct 29 23:30:17.327039 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 29 23:30:17.329020 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 29 23:30:17.329027 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 23:30:17.334736 systemd[1]: Reached target network.target - Network.
Oct 29 23:30:17.352504 systemd-networkd[1152]: eth0: Link UP
Oct 29 23:30:17.352517 systemd-networkd[1152]: eth0: Gained carrier
Oct 29 23:30:17.352538 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 29 23:30:17.375718 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.30.28/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 29 23:30:17.741423 ignition[1078]: Ignition 2.22.0
Oct 29 23:30:17.741446 ignition[1078]: Stage: fetch-offline
Oct 29 23:30:17.742327 ignition[1078]: no configs at "/usr/lib/ignition/base.d"
Oct 29 23:30:17.746472 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 29 23:30:17.742352 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 29 23:30:17.752417 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 29 23:30:17.742845 ignition[1078]: Ignition finished successfully
Oct 29 23:30:17.807820 ignition[1164]: Ignition 2.22.0
Oct 29 23:30:17.808330 ignition[1164]: Stage: fetch
Oct 29 23:30:17.808869 ignition[1164]: no configs at "/usr/lib/ignition/base.d"
Oct 29 23:30:17.808893 ignition[1164]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 29 23:30:17.809017 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 29 23:30:17.832111 ignition[1164]: PUT result: OK
Oct 29 23:30:17.836413 ignition[1164]: parsed url from cmdline: ""
Oct 29 23:30:17.836430 ignition[1164]: no config URL provided
Oct 29 23:30:17.836444 ignition[1164]: reading system config file "/usr/lib/ignition/user.ign"
Oct 29 23:30:17.836467 ignition[1164]: no config at "/usr/lib/ignition/user.ign"
Oct 29 23:30:17.836513 ignition[1164]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 29 23:30:17.846821 ignition[1164]: PUT result: OK
Oct 29 23:30:17.847080 ignition[1164]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Oct 29 23:30:17.852162 ignition[1164]: GET result: OK
Oct 29 23:30:17.852492 ignition[1164]: parsing config with SHA512: 2c879c9078bcd66bf800116fc4a8d931f10977595c50f6d2761e5828cd8f9adab6b09f909fa098c7189e21a56171c7d63e511cd83d8b14f2f88ffb4e842afc57
Oct 29 23:30:17.866888 unknown[1164]: fetched base config from "system"
Oct 29 23:30:17.867541 ignition[1164]: fetch: fetch complete
Oct 29 23:30:17.866907 unknown[1164]: fetched base config from "system"
Oct 29 23:30:17.867552 ignition[1164]: fetch: fetch passed
Oct 29 23:30:17.866932 unknown[1164]: fetched user config from "aws"
Oct 29 23:30:17.867625 ignition[1164]: Ignition finished successfully
Oct 29 23:30:17.873567 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 29 23:30:17.887907 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 29 23:30:17.940188 ignition[1171]: Ignition 2.22.0
Oct 29 23:30:17.940211 ignition[1171]: Stage: kargs
Oct 29 23:30:17.941209 ignition[1171]: no configs at "/usr/lib/ignition/base.d"
Oct 29 23:30:17.941232 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 29 23:30:17.941374 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 29 23:30:17.951887 ignition[1171]: PUT result: OK
Oct 29 23:30:17.956637 ignition[1171]: kargs: kargs passed
Oct 29 23:30:17.958516 ignition[1171]: Ignition finished successfully
Oct 29 23:30:17.962759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 29 23:30:17.970305 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 29 23:30:18.021391 ignition[1177]: Ignition 2.22.0
Oct 29 23:30:18.021983 ignition[1177]: Stage: disks
Oct 29 23:30:18.023489 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Oct 29 23:30:18.023516 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 29 23:30:18.023993 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 29 23:30:18.034527 ignition[1177]: PUT result: OK
Oct 29 23:30:18.039462 ignition[1177]: disks: disks passed
Oct 29 23:30:18.039832 ignition[1177]: Ignition finished successfully
Oct 29 23:30:18.046591 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 29 23:30:18.047099 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 29 23:30:18.057724 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 29 23:30:18.060750 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 29 23:30:18.069006 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 29 23:30:18.072157 systemd[1]: Reached target basic.target - Basic System.
Oct 29 23:30:18.080741 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 29 23:30:18.140813 systemd-fsck[1185]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 29 23:30:18.145063 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 29 23:30:18.155331 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 29 23:30:18.278674 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b8ba1a5d-9c06-458f-b680-11cfeb802ce1 r/w with ordered data mode. Quota mode: none.
Oct 29 23:30:18.279426 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 29 23:30:18.285359 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 29 23:30:18.290936 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 23:30:18.298192 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 29 23:30:18.310410 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 29 23:30:18.310519 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 29 23:30:18.310573 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 29 23:30:18.340293 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 29 23:30:18.348987 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1204)
Oct 29 23:30:18.349033 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0
Oct 29 23:30:18.349060 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 29 23:30:18.352954 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 29 23:30:18.364767 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 29 23:30:18.364818 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Oct 29 23:30:18.365903 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 23:30:18.712947 initrd-setup-root[1228]: cut: /sysroot/etc/passwd: No such file or directory
Oct 29 23:30:18.746943 initrd-setup-root[1235]: cut: /sysroot/etc/group: No such file or directory
Oct 29 23:30:18.754946 systemd-networkd[1152]: eth0: Gained IPv6LL
Oct 29 23:30:18.785373 initrd-setup-root[1242]: cut: /sysroot/etc/shadow: No such file or directory
Oct 29 23:30:18.807203 initrd-setup-root[1249]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 29 23:30:19.122166 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 29 23:30:19.128693 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 29 23:30:19.135415 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 29 23:30:19.174242 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 29 23:30:19.180070 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0
Oct 29 23:30:19.210985 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 29 23:30:19.238932 ignition[1316]: INFO : Ignition 2.22.0
Oct 29 23:30:19.238932 ignition[1316]: INFO : Stage: mount
Oct 29 23:30:19.243547 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 23:30:19.243547 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 29 23:30:19.243547 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 29 23:30:19.252684 ignition[1316]: INFO : PUT result: OK
Oct 29 23:30:19.260783 ignition[1316]: INFO : mount: mount passed
Oct 29 23:30:19.262685 ignition[1316]: INFO : Ignition finished successfully
Oct 29 23:30:19.267816 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 29 23:30:19.274070 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 29 23:30:19.306250 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 29 23:30:19.360591 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1329)
Oct 29 23:30:19.360705 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2aff5c98-43c2-4473-970e-0d2dedd7cca0
Oct 29 23:30:19.362671 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 29 23:30:19.369845 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 29 23:30:19.369942 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Oct 29 23:30:19.373526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 29 23:30:19.426358 ignition[1346]: INFO : Ignition 2.22.0
Oct 29 23:30:19.426358 ignition[1346]: INFO : Stage: files
Oct 29 23:30:19.430719 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 29 23:30:19.430719 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 29 23:30:19.430719 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 29 23:30:19.430719 ignition[1346]: INFO : PUT result: OK
Oct 29 23:30:19.443738 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping
Oct 29 23:30:19.451129 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 29 23:30:19.451129 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 29 23:30:19.461697 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 29 23:30:19.465617 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 29 23:30:19.469480 unknown[1346]: wrote ssh authorized keys file for user: core
Oct 29 23:30:19.472155 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 29 23:30:19.477383 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 29 23:30:19.477383 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Oct 29 23:30:19.601868 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 29 23:30:19.847111 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Oct 29 23:30:19.852089 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 29 23:30:19.856810 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 29 23:30:19.861289 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 23:30:19.865890 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 29 23:30:19.870362 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 23:30:19.875518 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 29 23:30:19.875518 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 23:30:19.875518 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 29 23:30:19.889695 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 23:30:19.889695 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 29 23:30:19.889695 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 29 23:30:19.889695 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 29 23:30:19.889695 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 29 23:30:19.889695 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Oct 29 23:30:20.215940 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 29 23:30:20.572310 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Oct 29 23:30:20.572310 ignition[1346]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 29 23:30:20.589698 ignition[1346]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 29 23:30:20.597818 ignition[1346]: INFO : files: files passed
Oct 29 23:30:20.597818 ignition[1346]: INFO : Ignition finished successfully
Oct 29 23:30:20.613891 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 29 23:30:20.628545 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 29 23:30:20.642093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 29 23:30:20.665221 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 29 23:30:20.668333 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 29 23:30:20.681586 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 23:30:20.681586 initrd-setup-root-after-ignition[1376]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 23:30:20.692550 initrd-setup-root-after-ignition[1380]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 29 23:30:20.698591 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 29 23:30:20.705938 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 29 23:30:20.710178 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 29 23:30:20.794401 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 23:30:20.794597 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 29 23:30:20.798072 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 29 23:30:20.807514 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 29 23:30:20.807747 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 29 23:30:20.820310 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 29 23:30:20.874741 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:30:20.883482 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 29 23:30:20.916820 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:30:20.922766 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:30:20.923118 systemd[1]: Stopped target timers.target - Timer Units. Oct 29 23:30:20.931230 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 23:30:20.931487 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 29 23:30:20.937252 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 29 23:30:20.940713 systemd[1]: Stopped target basic.target - Basic System. Oct 29 23:30:20.948112 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 29 23:30:20.951830 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 29 23:30:20.956980 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 29 23:30:20.962357 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 29 23:30:20.966046 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 29 23:30:20.973841 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 29 23:30:20.982824 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 29 23:30:20.991559 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 29 23:30:20.996172 systemd[1]: Stopped target swap.target - Swaps. Oct 29 23:30:21.002428 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 23:30:21.002933 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 29 23:30:21.010565 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:30:21.013319 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:30:21.021840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 29 23:30:21.025815 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:30:21.029320 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 23:30:21.029572 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 29 23:30:21.040631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 23:30:21.041440 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 29 23:30:21.047184 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 23:30:21.047491 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Oct 29 23:30:21.059439 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 29 23:30:21.063211 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 23:30:21.063524 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:30:21.074730 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 29 23:30:21.082099 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 23:30:21.082762 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:30:21.095862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 23:30:21.096260 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 29 23:30:21.114235 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 23:30:21.114462 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 29 23:30:21.143126 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 23:30:21.156157 ignition[1400]: INFO : Ignition 2.22.0 Oct 29 23:30:21.158812 ignition[1400]: INFO : Stage: umount Oct 29 23:30:21.161145 ignition[1400]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 23:30:21.163994 ignition[1400]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 29 23:30:21.163994 ignition[1400]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 29 23:30:21.172255 ignition[1400]: INFO : PUT result: OK Oct 29 23:30:21.176627 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 23:30:21.176993 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 29 23:30:21.182345 ignition[1400]: INFO : umount: umount passed Oct 29 23:30:21.182345 ignition[1400]: INFO : Ignition finished successfully Oct 29 23:30:21.190051 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 23:30:21.190256 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 29 23:30:21.194992 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 23:30:21.195169 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 29 23:30:21.199013 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 23:30:21.199121 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 29 23:30:21.206215 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 29 23:30:21.206305 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 29 23:30:21.209364 systemd[1]: Stopped target network.target - Network. Oct 29 23:30:21.216333 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 23:30:21.216425 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 29 23:30:21.219849 systemd[1]: Stopped target paths.target - Path Units. Oct 29 23:30:21.226897 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 23:30:21.244055 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:30:21.246977 systemd[1]: Stopped target slices.target - Slice Units. Oct 29 23:30:21.249208 systemd[1]: Stopped target sockets.target - Socket Units. Oct 29 23:30:21.257498 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 23:30:21.257578 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 29 23:30:21.260628 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 23:30:21.260722 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Oct 29 23:30:21.267586 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 23:30:21.267712 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 29 23:30:21.270586 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 29 23:30:21.270696 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 29 23:30:21.277782 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 23:30:21.277873 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 29 23:30:21.281149 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 29 23:30:21.288330 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 29 23:30:21.315785 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 23:30:21.316132 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 29 23:30:21.326794 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 29 23:30:21.327231 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 23:30:21.327420 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 29 23:30:21.334868 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 29 23:30:21.336370 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 29 23:30:21.336942 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 23:30:21.337026 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:30:21.349678 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 29 23:30:21.364055 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 23:30:21.364322 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 29 23:30:21.373722 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 23:30:21.373834 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:30:21.381987 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 23:30:21.382082 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 29 23:30:21.384769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 29 23:30:21.384846 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:30:21.400520 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:30:21.419878 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 29 23:30:21.420185 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 29 23:30:21.438309 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 23:30:21.438870 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:30:21.448024 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 23:30:21.448161 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 29 23:30:21.451528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 23:30:21.451602 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:30:21.459614 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Oct 29 23:30:21.459738 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 29 23:30:21.473394 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 23:30:21.473492 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 29 23:30:21.481819 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 23:30:21.482414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 29 23:30:21.492969 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 29 23:30:21.495886 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 29 23:30:21.496026 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:30:21.510048 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 23:30:21.510171 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:30:21.520190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 29 23:30:21.520303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:21.528252 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 29 23:30:21.528870 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 29 23:30:21.528956 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 29 23:30:21.529595 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 23:30:21.542297 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 29 23:30:21.560180 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 23:30:21.560546 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 29 23:30:21.569206 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 29 23:30:21.574363 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 29 23:30:21.613057 systemd[1]: Switching root. Oct 29 23:30:21.669758 systemd-journald[256]: Journal stopped Oct 29 23:30:24.133883 systemd-journald[256]: Received SIGTERM from PID 1 (systemd). Oct 29 23:30:24.134014 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 23:30:24.134055 kernel: SELinux: policy capability open_perms=1 Oct 29 23:30:24.134086 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 23:30:24.134115 kernel: SELinux: policy capability always_check_network=0 Oct 29 23:30:24.134146 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 23:30:24.134181 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 23:30:24.134221 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 23:30:24.134248 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 23:30:24.134276 kernel: SELinux: policy capability userspace_initial_context=0 Oct 29 23:30:24.134305 kernel: audit: type=1403 audit(1761780622.130:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 23:30:24.134333 systemd[1]: Successfully loaded SELinux policy in 97.840ms. Oct 29 23:30:24.134375 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.217ms. 
Oct 29 23:30:24.134407 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 29 23:30:24.134436 systemd[1]: Detected virtualization amazon. Oct 29 23:30:24.134465 systemd[1]: Detected architecture arm64. Oct 29 23:30:24.134495 systemd[1]: Detected first boot. Oct 29 23:30:24.134525 systemd[1]: Initializing machine ID from VM UUID. Oct 29 23:30:24.134554 kernel: NET: Registered PF_VSOCK protocol family Oct 29 23:30:24.134583 zram_generator::config[1444]: No configuration found. Oct 29 23:30:24.134617 systemd[1]: Populated /etc with preset unit settings. Oct 29 23:30:24.134677 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 29 23:30:24.134714 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 23:30:24.134744 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 29 23:30:24.134778 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 23:30:24.134809 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 29 23:30:24.134836 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 29 23:30:24.134863 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 29 23:30:24.134894 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 29 23:30:24.134925 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 29 23:30:24.134954 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 29 23:30:24.134985 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 29 23:30:24.135014 systemd[1]: Created slice user.slice - User and Session Slice. Oct 29 23:30:24.135046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 29 23:30:24.135076 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 29 23:30:24.135103 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 29 23:30:24.135149 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 29 23:30:24.135189 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 29 23:30:24.135221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 29 23:30:24.135250 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 29 23:30:24.135280 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 29 23:30:24.135320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 29 23:30:24.135347 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 29 23:30:24.135376 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 29 23:30:24.135408 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 29 23:30:24.135438 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
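systemd logs the detected virtualization ("amazon") and architecture ("arm64") as it comes up after the root switch. A small sketch that queries the same information on a running system, assuming the systemd-detect-virt binary is present (it ships with systemd) and using only the Python standard library:

    import platform
    import subprocess

    # systemd-detect-virt prints the detected environment (e.g. "amazon", "kvm")
    # and exits non-zero when no virtualization is found.
    result = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True)
    virt = result.stdout.strip() or "none"

    print(f"virtualization: {virt}")
    print(f"architecture:   {platform.machine()}")  # e.g. "aarch64" on this host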
Oct 29 23:30:24.135467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 29 23:30:24.135495 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 29 23:30:24.135521 systemd[1]: Reached target slices.target - Slice Units. Oct 29 23:30:24.135550 systemd[1]: Reached target swap.target - Swaps. Oct 29 23:30:24.135585 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 29 23:30:24.135612 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 29 23:30:24.135642 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 29 23:30:24.135729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 29 23:30:24.135759 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 29 23:30:24.135786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 29 23:30:24.135816 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 29 23:30:24.135842 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 29 23:30:24.135869 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 29 23:30:24.135902 systemd[1]: Mounting media.mount - External Media Directory... Oct 29 23:30:24.135929 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 29 23:30:24.135956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 29 23:30:24.135982 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 29 23:30:24.136010 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 23:30:24.136038 systemd[1]: Reached target machines.target - Containers. Oct 29 23:30:24.136065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 29 23:30:24.136103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:24.136135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 29 23:30:24.136164 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 29 23:30:24.136193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:24.136220 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:30:24.136250 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:24.136277 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 29 23:30:24.136304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:24.136342 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 23:30:24.136374 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 23:30:24.136401 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 29 23:30:24.136428 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 23:30:24.136454 systemd[1]: Stopped systemd-fsck-usr.service. 
Oct 29 23:30:24.136484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:24.136512 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 29 23:30:24.136543 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 29 23:30:24.136571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 29 23:30:24.136598 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 29 23:30:24.136625 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 29 23:30:24.136684 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 29 23:30:24.136718 systemd[1]: verity-setup.service: Deactivated successfully. Oct 29 23:30:24.136746 systemd[1]: Stopped verity-setup.service. Oct 29 23:30:24.136774 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 29 23:30:24.136828 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 29 23:30:24.145758 systemd[1]: Mounted media.mount - External Media Directory. Oct 29 23:30:24.145811 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 29 23:30:24.145841 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 29 23:30:24.145870 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 29 23:30:24.145906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 29 23:30:24.145937 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 23:30:24.145976 kernel: loop: module loaded Oct 29 23:30:24.146006 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 29 23:30:24.146037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:30:24.146064 kernel: fuse: init (API version 7.41) Oct 29 23:30:24.146093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:24.146124 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:30:24.146154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:30:24.146186 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 23:30:24.146215 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 29 23:30:24.146243 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:24.146273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:24.146301 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 29 23:30:24.146331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 29 23:30:24.146360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 29 23:30:24.146389 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 29 23:30:24.146419 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 23:30:24.146453 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 29 23:30:24.146485 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Oct 29 23:30:24.146580 systemd-journald[1523]: Collecting audit messages is disabled. Oct 29 23:30:24.146632 systemd-journald[1523]: Journal started Oct 29 23:30:24.146776 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec299a1eb257db4bb4907a570ad13e4c) is 8M, max 75.3M, 67.3M free. Oct 29 23:30:23.449477 systemd[1]: Queued start job for default target multi-user.target. Oct 29 23:30:23.474486 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Oct 29 23:30:23.475323 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 23:30:24.155924 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 29 23:30:24.161344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:24.183372 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 29 23:30:24.183454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:30:24.191714 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 29 23:30:24.199705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:30:24.209369 kernel: ACPI: bus type drm_connector registered Oct 29 23:30:24.211369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 29 23:30:24.220285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 29 23:30:24.233673 systemd[1]: Started systemd-journald.service - Journal Service. Oct 29 23:30:24.235143 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:30:24.237143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:30:24.242091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 29 23:30:24.247955 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 29 23:30:24.251753 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 29 23:30:24.255205 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 29 23:30:24.301999 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 29 23:30:24.320688 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 29 23:30:24.324223 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 29 23:30:24.334461 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 29 23:30:24.346492 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 29 23:30:24.376378 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec299a1eb257db4bb4907a570ad13e4c is 45.407ms for 930 entries. Oct 29 23:30:24.376378 systemd-journald[1523]: System Journal (/var/log/journal/ec299a1eb257db4bb4907a570ad13e4c) is 8M, max 195.6M, 187.6M free. Oct 29 23:30:24.450395 kernel: loop0: detected capacity change from 0 to 61264 Oct 29 23:30:24.450472 systemd-journald[1523]: Received client request to flush runtime journal. Oct 29 23:30:24.386807 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
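The journal daemon above sizes its runtime journal under /run/log/journal (8M used, 75.3M max) and later flushes it to persistent storage. A sketch that reads the same journal from Python, assuming the python-systemd bindings are installed (they are not part of the standard library):

    from systemd import journal  # provided by the python-systemd package

    # Count entries from the current boot per syslog identifier, roughly
    # mirroring what `journalctl -b` would show for this log.
    reader = journal.Reader()
    reader.this_boot()

    counts = {}
    for entry in reader:
        ident = entry.get("SYSLOG_IDENTIFIER", "unknown")
        counts[ident] = counts.get(ident, 0) + 1

    for ident, n in sorted(counts.items(), key=lambda kv: -kv[1])[:10]:
        print(f"{n:6d}  {ident}")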
Oct 29 23:30:24.409795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 29 23:30:24.418165 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 29 23:30:24.422097 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 29 23:30:24.458743 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 29 23:30:24.479453 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 23:30:24.490903 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 23:30:24.529843 kernel: loop1: detected capacity change from 0 to 119368 Oct 29 23:30:24.545700 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 29 23:30:24.576748 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 29 23:30:24.582051 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 29 23:30:24.644343 kernel: loop2: detected capacity change from 0 to 207008 Oct 29 23:30:24.665095 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Oct 29 23:30:24.665126 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Oct 29 23:30:24.676067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 29 23:30:24.757693 kernel: loop3: detected capacity change from 0 to 100632 Oct 29 23:30:24.866745 kernel: loop4: detected capacity change from 0 to 61264 Oct 29 23:30:24.883748 kernel: loop5: detected capacity change from 0 to 119368 Oct 29 23:30:24.904787 kernel: loop6: detected capacity change from 0 to 207008 Oct 29 23:30:24.931725 kernel: loop7: detected capacity change from 0 to 100632 Oct 29 23:30:24.942743 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Oct 29 23:30:24.944204 (sd-merge)[1602]: Merged extensions into '/usr'. Oct 29 23:30:24.952502 systemd[1]: Reload requested from client PID 1552 ('systemd-sysext') (unit systemd-sysext.service)... Oct 29 23:30:24.952724 systemd[1]: Reloading... Oct 29 23:30:25.118930 zram_generator::config[1631]: No configuration found. Oct 29 23:30:25.568599 systemd[1]: Reloading finished in 615 ms. Oct 29 23:30:25.595747 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 29 23:30:25.599864 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 29 23:30:25.616908 systemd[1]: Starting ensure-sysext.service... Oct 29 23:30:25.622177 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 29 23:30:25.632955 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 29 23:30:25.670445 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)... Oct 29 23:30:25.670467 systemd[1]: Reloading... Oct 29 23:30:25.686268 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 29 23:30:25.686339 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 29 23:30:25.687524 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 23:30:25.691795 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
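The sd-merge step above overlays the sysext images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' into /usr, which is why systemd then reloads its unit set. A sketch that lists the extension images visible on such a system; the directories are the standard systemd-sysext search locations, and the final subprocess call assumes the systemd-sysext tool is available:

    import subprocess
    from pathlib import Path

    # systemd-sysext picks up extension images from these locations (among
    # others); /etc/extensions/kubernetes.raw above is a symlink that Ignition
    # wrote into one of them.
    for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        path = Path(directory)
        if path.is_dir():
            for image in sorted(path.iterdir()):
                print(f"{directory}: {image.name}")

    # Show the merge status as systemd-sysext itself reports it.
    subprocess.run(["systemd-sysext", "status"], check=False)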
Oct 29 23:30:25.693578 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 23:30:25.696327 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Oct 29 23:30:25.696503 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Oct 29 23:30:25.714084 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:30:25.714110 systemd-tmpfiles[1681]: Skipping /boot Oct 29 23:30:25.759861 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Oct 29 23:30:25.759891 systemd-tmpfiles[1681]: Skipping /boot Oct 29 23:30:25.808499 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Oct 29 23:30:25.846687 zram_generator::config[1713]: No configuration found. Oct 29 23:30:25.943687 ldconfig[1545]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 23:30:26.207732 (udev-worker)[1743]: Network interface NamePolicy= disabled on kernel command line. Oct 29 23:30:26.359099 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 29 23:30:26.360226 systemd[1]: Reloading finished in 689 ms. Oct 29 23:30:26.374168 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 29 23:30:26.380929 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 29 23:30:26.401990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 29 23:30:26.441462 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:30:26.452062 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 29 23:30:26.458073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:26.462270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:26.472278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:26.499195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:26.505338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:26.505617 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:26.511464 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 29 23:30:26.527999 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 29 23:30:26.540095 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 29 23:30:26.550256 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 29 23:30:26.559761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:30:26.560148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:26.567598 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:26.568384 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:26.576279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 29 23:30:26.577897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:30:26.594776 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 29 23:30:26.617221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 29 23:30:26.630159 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 29 23:30:26.648584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 29 23:30:26.658345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 29 23:30:26.662617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 29 23:30:26.662902 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 29 23:30:26.663303 systemd[1]: Reached target time-set.target - System Time Set. Oct 29 23:30:26.682040 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 29 23:30:26.690788 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 23:30:26.691219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 29 23:30:26.706360 systemd[1]: Finished ensure-sysext.service. Oct 29 23:30:26.733372 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 23:30:26.734905 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 29 23:30:26.741113 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 23:30:26.741480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 29 23:30:26.750810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 23:30:26.751562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 29 23:30:26.778249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 29 23:30:26.784843 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 23:30:26.784969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 29 23:30:26.788593 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 29 23:30:26.803007 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 29 23:30:26.845259 augenrules[1864]: No rules Oct 29 23:30:26.851039 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:30:26.851522 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:30:26.888885 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 29 23:30:26.924627 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 29 23:30:26.939133 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 23:30:27.151277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 29 23:30:27.234012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 29 23:30:27.245105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 29 23:30:27.250329 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 29 23:30:27.328634 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 29 23:30:27.356738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 29 23:30:27.420121 systemd-networkd[1811]: lo: Link UP Oct 29 23:30:27.420145 systemd-networkd[1811]: lo: Gained carrier Oct 29 23:30:27.422866 systemd-networkd[1811]: Enumeration completed Oct 29 23:30:27.423874 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 29 23:30:27.427758 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 29 23:30:27.427781 systemd-networkd[1811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 23:30:27.432121 systemd-networkd[1811]: eth0: Link UP Oct 29 23:30:27.432531 systemd-networkd[1811]: eth0: Gained carrier Oct 29 23:30:27.432578 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 29 23:30:27.434152 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 29 23:30:27.439870 systemd-resolved[1814]: Positive Trust Anchors: Oct 29 23:30:27.440333 systemd-resolved[1814]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 23:30:27.440402 systemd-resolved[1814]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 29 23:30:27.444028 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 29 23:30:27.453833 systemd-networkd[1811]: eth0: DHCPv4 address 172.31.30.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 29 23:30:27.460636 systemd-resolved[1814]: Defaulting to hostname 'linux'. Oct 29 23:30:27.464262 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 29 23:30:27.471885 systemd[1]: Reached target network.target - Network. Oct 29 23:30:27.474594 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 29 23:30:27.477729 systemd[1]: Reached target sysinit.target - System Initialization. Oct 29 23:30:27.481001 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 29 23:30:27.484889 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 29 23:30:27.490307 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 29 23:30:27.493572 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
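systemd-networkd acquired 172.31.30.28/20 with gateway 172.31.16.1 over DHCPv4 on eth0. A short standard-library sketch confirming how that address, prefix, and gateway relate:

    import ipaddress

    # Address and gateway exactly as logged by systemd-networkd's DHCPv4 lease.
    lease = ipaddress.ip_interface("172.31.30.28/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(lease.network)                # 172.31.16.0/20
    print(lease.network.num_addresses)  # 4096 addresses in a /20
    print(gateway in lease.network)     # True: the gateway is on-link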
Oct 29 23:30:27.497031 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 29 23:30:27.500365 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 23:30:27.500524 systemd[1]: Reached target paths.target - Path Units. Oct 29 23:30:27.503804 systemd[1]: Reached target timers.target - Timer Units. Oct 29 23:30:27.508096 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 29 23:30:27.513079 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 29 23:30:27.519540 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 29 23:30:27.523101 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 29 23:30:27.526768 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 29 23:30:27.538636 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 29 23:30:27.541937 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 29 23:30:27.548587 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 29 23:30:27.552264 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 29 23:30:27.555833 systemd[1]: Reached target sockets.target - Socket Units. Oct 29 23:30:27.558827 systemd[1]: Reached target basic.target - Basic System. Oct 29 23:30:27.561685 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:30:27.561911 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 29 23:30:27.565882 systemd[1]: Starting containerd.service - containerd container runtime... Oct 29 23:30:27.575128 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 29 23:30:27.582330 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 29 23:30:27.595351 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 29 23:30:27.604025 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 29 23:30:27.614843 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 29 23:30:27.619804 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 29 23:30:27.625200 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 29 23:30:27.637867 systemd[1]: Started ntpd.service - Network Time Service. Oct 29 23:30:27.650935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 29 23:30:27.664887 jq[1975]: false Oct 29 23:30:27.662076 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 29 23:30:27.671982 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 29 23:30:27.681591 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 29 23:30:27.696977 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 29 23:30:27.701796 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Oct 29 23:30:27.702674 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 29 23:30:27.706756 systemd[1]: Starting update-engine.service - Update Engine... Oct 29 23:30:27.724673 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 29 23:30:27.731881 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 29 23:30:27.736310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 23:30:27.736811 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 29 23:30:27.776970 extend-filesystems[1976]: Found /dev/nvme0n1p6 Oct 29 23:30:27.781045 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 23:30:27.794517 extend-filesystems[1976]: Found /dev/nvme0n1p9 Oct 29 23:30:27.821098 extend-filesystems[1976]: Checking size of /dev/nvme0n1p9 Oct 29 23:30:27.831602 jq[1991]: true Oct 29 23:30:27.838854 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 29 23:30:27.848474 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 23:30:27.851789 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 29 23:30:27.893230 extend-filesystems[1976]: Resized partition /dev/nvme0n1p9 Oct 29 23:30:27.899261 tar[1999]: linux-arm64/LICENSE Oct 29 23:30:27.899741 tar[1999]: linux-arm64/helm Oct 29 23:30:27.908130 dbus-daemon[1973]: [system] SELinux support is enabled Oct 29 23:30:27.908430 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 29 23:30:27.916279 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 23:30:27.916359 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 29 23:30:27.926066 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 23:30:27.926104 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 29 23:30:27.934392 dbus-daemon[1973]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1811 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 29 23:30:27.938579 update_engine[1990]: I20251029 23:30:27.938122 1990 main.cc:92] Flatcar Update Engine starting Oct 29 23:30:27.952492 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 29 23:30:27.956369 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 29 23:30:27.960066 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:32:15 UTC 2025 (1): Starting Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: ---------------------------------------------------- Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: ntp-4 is maintained by Network Time Foundation, Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: corporation. Support and training for ntp-4 are Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: available at https://www.nwtime.org/support Oct 29 23:30:27.965850 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: ---------------------------------------------------- Oct 29 23:30:27.963095 ntpd[1978]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:32:15 UTC 2025 (1): Starting Oct 29 23:30:27.963191 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 29 23:30:27.971110 extend-filesystems[2026]: resize2fs 1.47.3 (8-Jul-2025) Oct 29 23:30:27.963210 ntpd[1978]: ---------------------------------------------------- Oct 29 23:30:27.963227 ntpd[1978]: ntp-4 is maintained by Network Time Foundation, Oct 29 23:30:27.963242 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 29 23:30:27.963259 ntpd[1978]: corporation. Support and training for ntp-4 are Oct 29 23:30:27.963275 ntpd[1978]: available at https://www.nwtime.org/support Oct 29 23:30:27.963291 ntpd[1978]: ---------------------------------------------------- Oct 29 23:30:27.985747 ntpd[1978]: proto: precision = 0.096 usec (-23) Oct 29 23:30:27.986964 systemd[1]: Started update-engine.service - Update Engine. Oct 29 23:30:27.990846 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: proto: precision = 0.096 usec (-23) Oct 29 23:30:27.996526 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Oct 29 23:30:27.994073 ntpd[1978]: basedate set to 2025-10-17 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: basedate set to 2025-10-17 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: gps base set to 2025-10-19 (week 2389) Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Listen normally on 3 eth0 172.31.30.28:123 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: Listen normally on 4 lo [::1]:123 Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: bind(21) AF_INET6 [fe80::4d9:b1ff:feb8:4dd3%2]:123 flags 0x811 failed: Cannot assign requested address Oct 29 23:30:27.996741 ntpd[1978]: 29 Oct 23:30:27 ntpd[1978]: unable to create socket on eth0 (5) for [fe80::4d9:b1ff:feb8:4dd3%2]:123 Oct 29 23:30:28.009438 update_engine[1990]: I20251029 23:30:27.993970 1990 update_check_scheduler.cc:74] Next update check in 4m36s Oct 29 23:30:27.994102 ntpd[1978]: gps base set to 2025-10-19 (week 2389) Oct 29 23:30:28.005966 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
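The kernel line above reports the root ext4 filesystem being resized online from 553472 to 3587067 blocks of 4k each (the usual Flatcar first-boot growth of the root partition). A quick check of what those block counts mean in bytes:

    BLOCK_SIZE = 4096  # ext4 block size, reported as "(4k)" in the log

    old_blocks = 553_472
    new_blocks = 3_587_067

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before resize: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
    print(f"after resize:  {gib(new_blocks):.2f} GiB")  # ~13.68 GiB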
Oct 29 23:30:27.994323 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123 Oct 29 23:30:28.018815 jq[2014]: true Oct 29 23:30:28.016082 systemd-coredump[2028]: Process 1978 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Oct 29 23:30:27.994377 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 29 23:30:27.994805 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123 Oct 29 23:30:28.028517 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Oct 29 23:30:27.994865 ntpd[1978]: Listen normally on 3 eth0 172.31.30.28:123 Oct 29 23:30:27.994916 ntpd[1978]: Listen normally on 4 lo [::1]:123 Oct 29 23:30:27.994969 ntpd[1978]: bind(21) AF_INET6 [fe80::4d9:b1ff:feb8:4dd3%2]:123 flags 0x811 failed: Cannot assign requested address Oct 29 23:30:27.995012 ntpd[1978]: unable to create socket on eth0 (5) for [fe80::4d9:b1ff:feb8:4dd3%2]:123 Oct 29 23:30:28.043898 systemd[1]: Started systemd-coredump@0-2028-0.service - Process Core Dump (PID 2028/UID 0). Oct 29 23:30:28.081298 coreos-metadata[1972]: Oct 29 23:30:28.080 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 29 23:30:28.092898 coreos-metadata[1972]: Oct 29 23:30:28.092 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Oct 29 23:30:28.097953 coreos-metadata[1972]: Oct 29 23:30:28.097 INFO Fetch successful Oct 29 23:30:28.097953 coreos-metadata[1972]: Oct 29 23:30:28.097 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Oct 29 23:30:28.105834 coreos-metadata[1972]: Oct 29 23:30:28.103 INFO Fetch successful Oct 29 23:30:28.105834 coreos-metadata[1972]: Oct 29 23:30:28.103 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Oct 29 23:30:28.106968 coreos-metadata[1972]: Oct 29 23:30:28.106 INFO Fetch successful Oct 29 23:30:28.106968 coreos-metadata[1972]: Oct 29 23:30:28.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Oct 29 23:30:28.114120 coreos-metadata[1972]: Oct 29 23:30:28.111 INFO Fetch successful Oct 29 23:30:28.114120 coreos-metadata[1972]: Oct 29 23:30:28.111 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Oct 29 23:30:28.115188 coreos-metadata[1972]: Oct 29 23:30:28.114 INFO Fetch failed with 404: resource not found Oct 29 23:30:28.115188 coreos-metadata[1972]: Oct 29 23:30:28.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Oct 29 23:30:28.121007 coreos-metadata[1972]: Oct 29 23:30:28.120 INFO Fetch successful Oct 29 23:30:28.121007 coreos-metadata[1972]: Oct 29 23:30:28.120 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Oct 29 23:30:28.122178 coreos-metadata[1972]: Oct 29 23:30:28.121 INFO Fetch successful Oct 29 23:30:28.122178 coreos-metadata[1972]: Oct 29 23:30:28.121 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Oct 29 23:30:28.124287 coreos-metadata[1972]: Oct 29 23:30:28.124 INFO Fetch successful Oct 29 23:30:28.124287 coreos-metadata[1972]: Oct 29 23:30:28.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Oct 29 23:30:28.127823 coreos-metadata[1972]: Oct 29 23:30:28.127 INFO Fetch successful Oct 29 23:30:28.127823 coreos-metadata[1972]: Oct 29 23:30:28.127 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Oct 29 23:30:28.130372 coreos-metadata[1972]: Oct 29 
23:30:28.129 INFO Fetch successful Oct 29 23:30:28.177728 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 29 23:30:28.192706 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Oct 29 23:30:28.215410 extend-filesystems[2026]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Oct 29 23:30:28.215410 extend-filesystems[2026]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 29 23:30:28.215410 extend-filesystems[2026]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Oct 29 23:30:28.230932 extend-filesystems[1976]: Resized filesystem in /dev/nvme0n1p9 Oct 29 23:30:28.230323 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 23:30:28.230969 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 29 23:30:28.278615 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 29 23:30:28.405623 systemd-logind[1985]: Watching system buttons on /dev/input/event0 (Power Button) Oct 29 23:30:28.407694 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 29 23:30:28.421073 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 29 23:30:28.422797 systemd-logind[1985]: Watching system buttons on /dev/input/event1 (Sleep Button) Oct 29 23:30:28.423235 systemd-logind[1985]: New seat seat0. Oct 29 23:30:28.429487 systemd[1]: Started systemd-logind.service - User Login Management. Oct 29 23:30:28.457951 bash[2068]: Updated "/home/core/.ssh/authorized_keys" Oct 29 23:30:28.464741 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 29 23:30:28.494738 systemd[1]: Starting sshkeys.service... Oct 29 23:30:28.645326 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 29 23:30:28.652285 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 29 23:30:28.746542 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 29 23:30:28.776450 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 29 23:30:28.788537 dbus-daemon[1973]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 29 23:30:28.819458 systemd[1]: Starting polkit.service - Authorization Manager... Oct 29 23:30:28.990679 containerd[2015]: time="2025-10-29T23:30:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 29 23:30:28.993873 systemd-networkd[1811]: eth0: Gained IPv6LL Oct 29 23:30:29.008132 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 29 23:30:29.019942 containerd[2015]: time="2025-10-29T23:30:29.018629566Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 29 23:30:29.020258 systemd[1]: Reached target network-online.target - Network is Online. Oct 29 23:30:29.032532 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Oct 29 23:30:29.045372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:30:29.058384 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
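Both Ignition and coreos-metadata talk to the EC2 instance metadata service at 169.254.169.254: first a PUT to /latest/api/token for an IMDSv2 session token, then GETs against individual metadata paths, as the "Fetching ..." lines above show. A minimal standard-library sketch of that handshake; the endpoint paths mirror the log, while the token TTL value is the commonly used maximum and is an assumption here:

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT for a session token (IMDSv2), as in "PUT .../latest/api/token" above.
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    with urllib.request.urlopen(token_req, timeout=2) as resp:
        token = resp.read().decode()

    # Step 2: GET metadata paths with the token, as coreos-metadata does.
    def fetch(path: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    for path in ("2021-01-03/meta-data/instance-id", "2021-01-03/meta-data/local-ipv4"):
        print(path, "=>", fetch(path))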
Oct 29 23:30:29.075402 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 23:30:29.187053 containerd[2015]: time="2025-10-29T23:30:29.186996370Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.608µs" Oct 29 23:30:29.187239 containerd[2015]: time="2025-10-29T23:30:29.187205626Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 29 23:30:29.187452 containerd[2015]: time="2025-10-29T23:30:29.187410034Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 29 23:30:29.190501 containerd[2015]: time="2025-10-29T23:30:29.190434382Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 29 23:30:29.191724 containerd[2015]: time="2025-10-29T23:30:29.191683558Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 29 23:30:29.195458 containerd[2015]: time="2025-10-29T23:30:29.192048394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:30:29.195458 containerd[2015]: time="2025-10-29T23:30:29.192215830Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 29 23:30:29.195458 containerd[2015]: time="2025-10-29T23:30:29.192242710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:30:29.195458 containerd[2015]: time="2025-10-29T23:30:29.192608842Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199002886Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199096318Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199123594Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199347082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199854910Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199940302Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.199965826Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.200033566Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.200451850Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 29 23:30:29.200695 containerd[2015]: time="2025-10-29T23:30:29.200609794Z" level=info msg="metadata content store policy set" policy=shared Oct 29 23:30:29.219829 containerd[2015]: time="2025-10-29T23:30:29.219723899Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.219990887Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220135091Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220168979Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220198967Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220237259Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220266695Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220296011Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220323239Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220350491Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220379771Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 29 23:30:29.220677 containerd[2015]: time="2025-10-29T23:30:29.220415447Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.221608403Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.221716955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.221756219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.221784395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.221812715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.225466871Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.225531239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.225558731Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.225602483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.225635699Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.225921275Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.226304975Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.226345403Z" level=info msg="Start snapshots syncer" Oct 29 23:30:29.227674 containerd[2015]: time="2025-10-29T23:30:29.226389203Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 29 23:30:29.233928 containerd[2015]: time="2025-10-29T23:30:29.233842271Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 29 23:30:29.234543 containerd[2015]: time="2025-10-29T23:30:29.234247559Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 29 23:30:29.239675 containerd[2015]: time="2025-10-29T23:30:29.237627923Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 29 23:30:29.239675 containerd[2015]: time="2025-10-29T23:30:29.238029863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 29 23:30:29.239675 containerd[2015]: time="2025-10-29T23:30:29.238079435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 29 23:30:29.239675 containerd[2015]: time="2025-10-29T23:30:29.238111391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 29 23:30:29.239675 containerd[2015]: time="2025-10-29T23:30:29.238139351Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 29 23:30:29.239675 containerd[2015]: time="2025-10-29T23:30:29.238169867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.238200479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243149195Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243228323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243260351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243290207Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243370103Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243403559Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243526763Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243554447Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243575555Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243601511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 29 23:30:29.244097 containerd[2015]: time="2025-10-29T23:30:29.243628091Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 29 23:30:29.244897 containerd[2015]: time="2025-10-29T23:30:29.244857251Z" level=info msg="runtime interface created" Oct 29 23:30:29.250377 containerd[2015]: time="2025-10-29T23:30:29.245191763Z" level=info msg="created NRI interface" Oct 29 23:30:29.250377 containerd[2015]: 
time="2025-10-29T23:30:29.245231675Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 29 23:30:29.250377 containerd[2015]: time="2025-10-29T23:30:29.245273183Z" level=info msg="Connect containerd service" Oct 29 23:30:29.250377 containerd[2015]: time="2025-10-29T23:30:29.245359091Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 29 23:30:29.257705 containerd[2015]: time="2025-10-29T23:30:29.254803787Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 23:30:29.262738 coreos-metadata[2139]: Oct 29 23:30:29.262 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Oct 29 23:30:29.267877 coreos-metadata[2139]: Oct 29 23:30:29.265 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Oct 29 23:30:29.267877 coreos-metadata[2139]: Oct 29 23:30:29.267 INFO Fetch successful Oct 29 23:30:29.267877 coreos-metadata[2139]: Oct 29 23:30:29.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 29 23:30:29.271675 coreos-metadata[2139]: Oct 29 23:30:29.269 INFO Fetch successful Oct 29 23:30:29.276200 unknown[2139]: wrote ssh authorized keys file for user: core Oct 29 23:30:29.289842 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 29 23:30:29.321967 systemd-coredump[2032]: Process 1978 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1978: #0 0x0000aaaad66d0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaad667fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaad6680240 n/a (ntpd + 0x10240) #3 0x0000aaaad667be14 n/a (ntpd + 0xbe14) #4 0x0000aaaad667d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaad6685a38 n/a (ntpd + 0x15a38) #6 0x0000aaaad667738c n/a (ntpd + 0x738c) #7 0x0000ffff8cd22034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff8cd22118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaad66773f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Oct 29 23:30:29.339056 systemd[1]: systemd-coredump@0-2028-0.service: Deactivated successfully. Oct 29 23:30:29.351708 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Oct 29 23:30:29.352005 systemd[1]: ntpd.service: Failed with result 'core-dump'. Oct 29 23:30:29.389427 update-ssh-keys[2192]: Updated "/home/core/.ssh/authorized_keys" Oct 29 23:30:29.391909 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 29 23:30:29.404760 systemd[1]: Finished sshkeys.service. Oct 29 23:30:29.503202 amazon-ssm-agent[2171]: Initializing new seelog logger Oct 29 23:30:29.508012 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete Oct 29 23:30:29.508012 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.508012 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Oct 29 23:30:29.508012 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 processing appconfig overrides Oct 29 23:30:29.511553 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.511553 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.511553 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 processing appconfig overrides Oct 29 23:30:29.516578 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.516578 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.516578 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 processing appconfig overrides Oct 29 23:30:29.516578 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.5108 INFO Proxy environment variables: Oct 29 23:30:29.520846 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.522552 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:29.522887 amazon-ssm-agent[2171]: 2025/10/29 23:30:29 processing appconfig overrides Oct 29 23:30:29.565482 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Oct 29 23:30:29.572100 systemd[1]: Started ntpd.service - Network Time Service. Oct 29 23:30:29.614895 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.5109 INFO no_proxy: Oct 29 23:30:29.620335 polkitd[2155]: Started polkitd version 126 Oct 29 23:30:29.654690 polkitd[2155]: Loading rules from directory /etc/polkit-1/rules.d Oct 29 23:30:29.655328 polkitd[2155]: Loading rules from directory /run/polkit-1/rules.d Oct 29 23:30:29.655409 polkitd[2155]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 29 23:30:29.657425 polkitd[2155]: Loading rules from directory /usr/local/share/polkit-1/rules.d Oct 29 23:30:29.657507 polkitd[2155]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 29 23:30:29.657593 polkitd[2155]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 29 23:30:29.666985 polkitd[2155]: Finished loading, compiling and executing 2 rules Oct 29 23:30:29.671609 systemd[1]: Started polkit.service - Authorization Manager. Oct 29 23:30:29.684115 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 29 23:30:29.691918 polkitd[2155]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 29 23:30:29.720772 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.5109 INFO https_proxy: Oct 29 23:30:29.724684 ntpd[2210]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:32:15 UTC 2025 (1): Starting Oct 29 23:30:29.727815 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: ntpd 4.2.8p18@1.4062-o Wed Oct 29 21:32:15 UTC 2025 (1): Starting Oct 29 23:30:29.729933 ntpd[2210]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: ---------------------------------------------------- Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: ntp-4 is maintained by Network Time Foundation, Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: corporation. Support and training for ntp-4 are Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: available at https://www.nwtime.org/support Oct 29 23:30:29.731118 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: ---------------------------------------------------- Oct 29 23:30:29.729989 ntpd[2210]: ---------------------------------------------------- Oct 29 23:30:29.730010 ntpd[2210]: ntp-4 is maintained by Network Time Foundation, Oct 29 23:30:29.730028 ntpd[2210]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 29 23:30:29.730044 ntpd[2210]: corporation. Support and training for ntp-4 are Oct 29 23:30:29.730061 ntpd[2210]: available at https://www.nwtime.org/support Oct 29 23:30:29.730077 ntpd[2210]: ---------------------------------------------------- Oct 29 23:30:29.736218 ntpd[2210]: proto: precision = 0.096 usec (-23) Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: proto: precision = 0.096 usec (-23) Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: basedate set to 2025-10-17 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: gps base set to 2025-10-19 (week 2389) Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listen and drop on 0 v6wildcard [::]:123 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listen normally on 2 lo 127.0.0.1:123 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listen normally on 3 eth0 172.31.30.28:123 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listen normally on 4 lo [::1]:123 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listen normally on 5 eth0 [fe80::4d9:b1ff:feb8:4dd3%2]:123 Oct 29 23:30:29.740332 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: Listening on routing socket on fd #22 for interface updates Oct 29 23:30:29.736540 ntpd[2210]: basedate set to 2025-10-17 Oct 29 23:30:29.736560 ntpd[2210]: gps base set to 2025-10-19 (week 2389) Oct 29 23:30:29.736721 ntpd[2210]: Listen and drop on 0 v6wildcard [::]:123 Oct 29 23:30:29.736764 ntpd[2210]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 29 23:30:29.737036 ntpd[2210]: Listen normally on 2 lo 127.0.0.1:123 Oct 29 23:30:29.737079 ntpd[2210]: Listen normally on 3 eth0 172.31.30.28:123 Oct 29 23:30:29.737122 ntpd[2210]: Listen normally on 4 lo [::1]:123 Oct 29 23:30:29.737165 ntpd[2210]: Listen normally on 5 eth0 [fe80::4d9:b1ff:feb8:4dd3%2]:123 Oct 29 23:30:29.737205 ntpd[2210]: Listening on routing socket on fd #22 for interface updates Oct 29 23:30:29.755855 systemd-hostnamed[2027]: Hostname set to (transient) Oct 29 23:30:29.757924 systemd-resolved[1814]: System hostname changed to 'ip-172-31-30-28'. Oct 29 23:30:29.775839 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 29 23:30:29.776541 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 29 23:30:29.776541 ntpd[2210]: 29 Oct 23:30:29 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 29 23:30:29.775907 ntpd[2210]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 29 23:30:29.817337 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.5109 INFO http_proxy: Oct 29 23:30:29.842032 containerd[2015]: time="2025-10-29T23:30:29.841984370Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 29 23:30:29.843903 containerd[2015]: time="2025-10-29T23:30:29.842208914Z" level=info msg="Start subscribing containerd event" Oct 29 23:30:29.843903 containerd[2015]: time="2025-10-29T23:30:29.842410970Z" level=info msg="Start recovering state" Oct 29 23:30:29.843903 containerd[2015]: time="2025-10-29T23:30:29.842553338Z" level=info msg="Start event monitor" Oct 29 23:30:29.843903 containerd[2015]: time="2025-10-29T23:30:29.842580026Z" level=info msg="Start cni network conf syncer for default" Oct 29 23:30:29.843903 containerd[2015]: time="2025-10-29T23:30:29.843821174Z" level=info msg="Start streaming server" Oct 29 23:30:29.847101 containerd[2015]: time="2025-10-29T23:30:29.844711622Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 29 23:30:29.847101 containerd[2015]: time="2025-10-29T23:30:29.844741454Z" level=info msg="runtime interface starting up..." Oct 29 23:30:29.847101 containerd[2015]: time="2025-10-29T23:30:29.844758386Z" level=info msg="starting plugins..." Oct 29 23:30:29.847101 containerd[2015]: time="2025-10-29T23:30:29.844793858Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 29 23:30:29.847101 containerd[2015]: time="2025-10-29T23:30:29.846665726Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 23:30:29.849690 containerd[2015]: time="2025-10-29T23:30:29.848744726Z" level=info msg="containerd successfully booted in 0.862196s" Oct 29 23:30:29.848860 systemd[1]: Started containerd.service - containerd container runtime. Oct 29 23:30:29.918749 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.5111 INFO Checking if agent identity type OnPrem can be assumed Oct 29 23:30:30.017776 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.5112 INFO Checking if agent identity type EC2 can be assumed Oct 29 23:30:30.116816 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7648 INFO Agent will take identity from EC2 Oct 29 23:30:30.193685 tar[1999]: linux-arm64/README.md Oct 29 23:30:30.218692 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7704 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Oct 29 23:30:30.225436 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 29 23:30:30.262461 sshd_keygen[2011]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 23:30:30.299761 amazon-ssm-agent[2171]: 2025/10/29 23:30:30 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:30.299761 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Oct 29 23:30:30.300037 amazon-ssm-agent[2171]: 2025/10/29 23:30:30 processing appconfig overrides Oct 29 23:30:30.304404 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 29 23:30:30.314489 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 29 23:30:30.323549 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7705 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Oct 29 23:30:30.325189 systemd[1]: Started sshd@0-172.31.30.28:22-139.178.89.65:52800.service - OpenSSH per-connection server daemon (139.178.89.65:52800). Oct 29 23:30:30.348892 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 23:30:30.349426 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 29 23:30:30.356615 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 29 23:30:30.401819 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 29 23:30:30.410302 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Oct 29 23:30:30.418029 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 29 23:30:30.422062 systemd[1]: Reached target getty.target - Login Prompts. Oct 29 23:30:30.430715 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7705 INFO [amazon-ssm-agent] Starting Core Agent Oct 29 23:30:30.528859 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7705 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Oct 29 23:30:30.589983 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7705 INFO [Registrar] Starting registrar module Oct 29 23:30:30.590238 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7795 INFO [EC2Identity] Checking disk for registration info Oct 29 23:30:30.590238 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7796 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Oct 29 23:30:30.590238 amazon-ssm-agent[2171]: 2025-10-29 23:30:29.7796 INFO [EC2Identity] Generating registration keypair Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.2279 INFO [EC2Identity] Checking write access before registering Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.2286 INFO [EC2Identity] Registering EC2 instance with Systems Manager Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.2992 INFO [EC2Identity] EC2 registration was successful. Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.2993 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.2995 INFO [CredentialRefresher] credentialRefresher has started Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.2995 INFO [CredentialRefresher] Starting credentials refresher loop Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.5895 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 29 23:30:30.590630 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.5899 INFO [CredentialRefresher] Credentials ready Oct 29 23:30:30.601076 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 52800 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:30.605030 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:30.618427 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 29 23:30:30.625828 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 29 23:30:30.632710 amazon-ssm-agent[2171]: 2025-10-29 23:30:30.5905 INFO [CredentialRefresher] Next credential rotation will be in 29.9999842845 minutes Oct 29 23:30:30.648770 systemd-logind[1985]: New session 1 of user core. Oct 29 23:30:30.665133 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 29 23:30:30.676003 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 29 23:30:30.697643 (systemd)[2253]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 23:30:30.703872 systemd-logind[1985]: New session c1 of user core. Oct 29 23:30:30.990032 systemd[2253]: Queued start job for default target default.target. Oct 29 23:30:30.999044 systemd[2253]: Created slice app.slice - User Application Slice. Oct 29 23:30:30.999110 systemd[2253]: Reached target paths.target - Paths. Oct 29 23:30:30.999302 systemd[2253]: Reached target timers.target - Timers. 
Oct 29 23:30:31.001720 systemd[2253]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 29 23:30:31.033127 systemd[2253]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 29 23:30:31.033348 systemd[2253]: Reached target sockets.target - Sockets. Oct 29 23:30:31.033869 systemd[2253]: Reached target basic.target - Basic System. Oct 29 23:30:31.033970 systemd[2253]: Reached target default.target - Main User Target. Oct 29 23:30:31.034029 systemd[2253]: Startup finished in 317ms. Oct 29 23:30:31.034175 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 29 23:30:31.044922 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 29 23:30:31.203285 systemd[1]: Started sshd@1-172.31.30.28:22-139.178.89.65:52808.service - OpenSSH per-connection server daemon (139.178.89.65:52808). Oct 29 23:30:31.412751 sshd[2264]: Accepted publickey for core from 139.178.89.65 port 52808 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:31.415506 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:31.425288 systemd-logind[1985]: New session 2 of user core. Oct 29 23:30:31.437999 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 29 23:30:31.567347 sshd[2267]: Connection closed by 139.178.89.65 port 52808 Oct 29 23:30:31.568184 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:31.574781 systemd-logind[1985]: Session 2 logged out. Waiting for processes to exit. Oct 29 23:30:31.575924 systemd[1]: sshd@1-172.31.30.28:22-139.178.89.65:52808.service: Deactivated successfully. Oct 29 23:30:31.581487 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 23:30:31.587643 systemd-logind[1985]: Removed session 2. Oct 29 23:30:31.609402 systemd[1]: Started sshd@2-172.31.30.28:22-139.178.89.65:52812.service - OpenSSH per-connection server daemon (139.178.89.65:52812). Oct 29 23:30:31.627641 amazon-ssm-agent[2171]: 2025-10-29 23:30:31.6274 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 29 23:30:31.728566 amazon-ssm-agent[2171]: 2025-10-29 23:30:31.6357 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2276) started Oct 29 23:30:31.830892 amazon-ssm-agent[2171]: 2025-10-29 23:30:31.6357 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 29 23:30:31.839339 sshd[2275]: Accepted publickey for core from 139.178.89.65 port 52812 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:31.842891 sshd-session[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:31.852765 systemd-logind[1985]: New session 3 of user core. Oct 29 23:30:31.855894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:30:31.866749 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 29 23:30:31.870078 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 29 23:30:31.878382 systemd[1]: Startup finished in 3.765s (kernel) + 9.401s (initrd) + 9.844s (userspace) = 23.011s. 
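[annotation] The "Startup finished" entry above splits boot time into kernel, initrd, and userspace phases, the same breakdown systemd-analyze reports. A quick check that the rounded per-phase values match the logged total (values copied from the log line):

    # Values copied from the "Startup finished" log line (already rounded by systemd).
    kernel, initrd, userspace = 3.765, 9.401, 9.844
    total_logged = 23.011

    total = kernel + initrd + userspace
    print(f"sum of phases: {total:.3f}s, logged total: {total_logged:.3f}s")
    # 23.010 vs 23.011 -- the 1 ms gap is just rounding of the per-phase figures.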
Oct 29 23:30:31.889338 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:30:32.002429 sshd[2290]: Connection closed by 139.178.89.65 port 52812 Oct 29 23:30:32.003772 sshd-session[2275]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:32.016341 systemd[1]: sshd@2-172.31.30.28:22-139.178.89.65:52812.service: Deactivated successfully. Oct 29 23:30:32.020918 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 23:30:32.023432 systemd-logind[1985]: Session 3 logged out. Waiting for processes to exit. Oct 29 23:30:32.026428 systemd-logind[1985]: Removed session 3. Oct 29 23:30:33.066515 kubelet[2288]: E1029 23:30:33.066452 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:30:33.071050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:30:33.071396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:30:33.072026 systemd[1]: kubelet.service: Consumed 1.448s CPU time, 257.5M memory peak. Oct 29 23:30:42.047030 systemd[1]: Started sshd@3-172.31.30.28:22-139.178.89.65:45070.service - OpenSSH per-connection server daemon (139.178.89.65:45070). Oct 29 23:30:42.253245 sshd[2310]: Accepted publickey for core from 139.178.89.65 port 45070 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:42.255542 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:42.264738 systemd-logind[1985]: New session 4 of user core. Oct 29 23:30:42.271926 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 29 23:30:42.396445 sshd[2313]: Connection closed by 139.178.89.65 port 45070 Oct 29 23:30:42.397243 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:42.403410 systemd[1]: sshd@3-172.31.30.28:22-139.178.89.65:45070.service: Deactivated successfully. Oct 29 23:30:42.406407 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 23:30:42.410533 systemd-logind[1985]: Session 4 logged out. Waiting for processes to exit. Oct 29 23:30:42.412892 systemd-logind[1985]: Removed session 4. Oct 29 23:30:42.433089 systemd[1]: Started sshd@4-172.31.30.28:22-139.178.89.65:45076.service - OpenSSH per-connection server daemon (139.178.89.65:45076). Oct 29 23:30:42.615932 sshd[2319]: Accepted publickey for core from 139.178.89.65 port 45076 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:42.618373 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:42.629742 systemd-logind[1985]: New session 5 of user core. Oct 29 23:30:42.638024 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 29 23:30:42.753776 sshd[2322]: Connection closed by 139.178.89.65 port 45076 Oct 29 23:30:42.755142 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:42.764156 systemd[1]: sshd@4-172.31.30.28:22-139.178.89.65:45076.service: Deactivated successfully. Oct 29 23:30:42.768504 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 23:30:42.770589 systemd-logind[1985]: Session 5 logged out. Waiting for processes to exit. 
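[annotation] The kubelet exit logged above happens because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written during kubeadm init/join, so repeated failures before the node joins a cluster are normal. For illustration only, a sketch of the kind of minimal KubeletConfiguration the kubelet is trying to read (the field values are generic assumptions, not this node's eventual config; writing to /var/lib/kubelet requires root):

    import pathlib

    # Illustrative minimal kubelet config; real clusters generate this via kubeadm.
    KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(KUBELET_CONFIG)
    print("wrote", path)

The cgroupDriver: systemd choice mirrors the SystemdCgroup=true setting visible in the containerd CRI config dumped earlier in this log.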
Oct 29 23:30:42.774445 systemd-logind[1985]: Removed session 5. Oct 29 23:30:42.790997 systemd[1]: Started sshd@5-172.31.30.28:22-139.178.89.65:45086.service - OpenSSH per-connection server daemon (139.178.89.65:45086). Oct 29 23:30:43.006696 sshd[2328]: Accepted publickey for core from 139.178.89.65 port 45086 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:43.008736 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:43.019423 systemd-logind[1985]: New session 6 of user core. Oct 29 23:30:43.028007 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 29 23:30:43.134422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 23:30:43.138707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:30:43.156704 sshd[2331]: Connection closed by 139.178.89.65 port 45086 Oct 29 23:30:43.157932 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:43.167624 systemd-logind[1985]: Session 6 logged out. Waiting for processes to exit. Oct 29 23:30:43.167869 systemd[1]: sshd@5-172.31.30.28:22-139.178.89.65:45086.service: Deactivated successfully. Oct 29 23:30:43.171340 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 23:30:43.195620 systemd-logind[1985]: Removed session 6. Oct 29 23:30:43.196469 systemd[1]: Started sshd@6-172.31.30.28:22-139.178.89.65:45098.service - OpenSSH per-connection server daemon (139.178.89.65:45098). Oct 29 23:30:43.413389 sshd[2340]: Accepted publickey for core from 139.178.89.65 port 45098 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:43.416752 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:43.426602 systemd-logind[1985]: New session 7 of user core. Oct 29 23:30:43.429959 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 29 23:30:43.546491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:30:43.563304 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:30:43.575850 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 23:30:43.577110 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:43.592204 sudo[2346]: pam_unix(sudo:session): session closed for user root Oct 29 23:30:43.617140 sshd[2343]: Connection closed by 139.178.89.65 port 45098 Oct 29 23:30:43.620043 sshd-session[2340]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:43.631126 systemd[1]: sshd@6-172.31.30.28:22-139.178.89.65:45098.service: Deactivated successfully. Oct 29 23:30:43.637834 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 23:30:43.639548 systemd-logind[1985]: Session 7 logged out. Waiting for processes to exit. Oct 29 23:30:43.661513 systemd[1]: Started sshd@7-172.31.30.28:22-139.178.89.65:45108.service - OpenSSH per-connection server daemon (139.178.89.65:45108). Oct 29 23:30:43.665980 systemd-logind[1985]: Removed session 7. 
Oct 29 23:30:43.679121 kubelet[2349]: E1029 23:30:43.679023 2349 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:30:43.690510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:30:43.691918 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:30:43.692693 systemd[1]: kubelet.service: Consumed 327ms CPU time, 104.9M memory peak. Oct 29 23:30:43.858156 sshd[2361]: Accepted publickey for core from 139.178.89.65 port 45108 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:43.860798 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:43.869279 systemd-logind[1985]: New session 8 of user core. Oct 29 23:30:43.881974 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 29 23:30:43.985124 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 23:30:43.986340 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:43.994092 sudo[2367]: pam_unix(sudo:session): session closed for user root Oct 29 23:30:44.004133 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 29 23:30:44.004769 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:44.021217 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 29 23:30:44.079050 augenrules[2389]: No rules Oct 29 23:30:44.081606 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 23:30:44.082237 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 29 23:30:44.084180 sudo[2366]: pam_unix(sudo:session): session closed for user root Oct 29 23:30:44.108628 sshd[2365]: Connection closed by 139.178.89.65 port 45108 Oct 29 23:30:44.108517 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Oct 29 23:30:44.115687 systemd[1]: sshd@7-172.31.30.28:22-139.178.89.65:45108.service: Deactivated successfully. Oct 29 23:30:44.119507 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 23:30:44.122123 systemd-logind[1985]: Session 8 logged out. Waiting for processes to exit. Oct 29 23:30:44.124390 systemd-logind[1985]: Removed session 8. Oct 29 23:30:44.143794 systemd[1]: Started sshd@8-172.31.30.28:22-139.178.89.65:45114.service - OpenSSH per-connection server daemon (139.178.89.65:45114). Oct 29 23:30:44.328003 sshd[2398]: Accepted publickey for core from 139.178.89.65 port 45114 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:30:44.329630 sshd-session[2398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:30:44.337402 systemd-logind[1985]: New session 9 of user core. Oct 29 23:30:44.352883 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 29 23:30:44.454152 sudo[2402]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 23:30:44.454830 sudo[2402]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 29 23:30:45.173946 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Oct 29 23:30:45.199520 (dockerd)[2420]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 29 23:30:45.727353 dockerd[2420]: time="2025-10-29T23:30:45.727283152Z" level=info msg="Starting up" Oct 29 23:30:45.729392 dockerd[2420]: time="2025-10-29T23:30:45.729347104Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 29 23:30:45.750137 dockerd[2420]: time="2025-10-29T23:30:45.750080037Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 29 23:30:45.801058 dockerd[2420]: time="2025-10-29T23:30:45.800998888Z" level=info msg="Loading containers: start." Oct 29 23:30:45.815719 kernel: Initializing XFRM netlink socket Oct 29 23:30:46.182047 (udev-worker)[2442]: Network interface NamePolicy= disabled on kernel command line. Oct 29 23:30:46.253833 systemd-networkd[1811]: docker0: Link UP Oct 29 23:30:46.258804 dockerd[2420]: time="2025-10-29T23:30:46.258736190Z" level=info msg="Loading containers: done." Oct 29 23:30:46.288361 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3598673236-merged.mount: Deactivated successfully. Oct 29 23:30:46.291822 dockerd[2420]: time="2025-10-29T23:30:46.291758177Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 23:30:46.291965 dockerd[2420]: time="2025-10-29T23:30:46.291876208Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 29 23:30:46.292077 dockerd[2420]: time="2025-10-29T23:30:46.292044676Z" level=info msg="Initializing buildkit" Oct 29 23:30:46.329746 dockerd[2420]: time="2025-10-29T23:30:46.329631860Z" level=info msg="Completed buildkit initialization" Oct 29 23:30:46.345185 dockerd[2420]: time="2025-10-29T23:30:46.345106934Z" level=info msg="Daemon has completed initialization" Oct 29 23:30:46.346362 dockerd[2420]: time="2025-10-29T23:30:46.346149883Z" level=info msg="API listen on /run/docker.sock" Oct 29 23:30:46.346635 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 29 23:30:47.694669 containerd[2015]: time="2025-10-29T23:30:47.694578030Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 29 23:30:48.267835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426866580.mount: Deactivated successfully. 
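[annotation] Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A small stdlib-only sketch that issues GET /version against it (assumes root privileges or docker-group membership; no third-party SDK):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())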
Oct 29 23:30:49.661192 containerd[2015]: time="2025-10-29T23:30:49.661105172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:49.664141 containerd[2015]: time="2025-10-29T23:30:49.664073715Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Oct 29 23:30:49.665100 containerd[2015]: time="2025-10-29T23:30:49.665041075Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:49.671070 containerd[2015]: time="2025-10-29T23:30:49.670998212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:49.674524 containerd[2015]: time="2025-10-29T23:30:49.674143652Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.979502855s" Oct 29 23:30:49.674524 containerd[2015]: time="2025-10-29T23:30:49.674203189Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 29 23:30:49.675112 containerd[2015]: time="2025-10-29T23:30:49.675054847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 29 23:30:51.046458 containerd[2015]: time="2025-10-29T23:30:51.046376815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:51.048854 containerd[2015]: time="2025-10-29T23:30:51.048789805Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Oct 29 23:30:51.049945 containerd[2015]: time="2025-10-29T23:30:51.049873779Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:51.054464 containerd[2015]: time="2025-10-29T23:30:51.054385753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:51.056714 containerd[2015]: time="2025-10-29T23:30:51.056421923Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.381304957s" Oct 29 23:30:51.056714 containerd[2015]: time="2025-10-29T23:30:51.056537745Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 29 23:30:51.057778 
containerd[2015]: time="2025-10-29T23:30:51.057737757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 29 23:30:52.220685 containerd[2015]: time="2025-10-29T23:30:52.220254469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:52.222604 containerd[2015]: time="2025-10-29T23:30:52.222559585Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Oct 29 23:30:52.223023 containerd[2015]: time="2025-10-29T23:30:52.222986782Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:52.227766 containerd[2015]: time="2025-10-29T23:30:52.227712848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:52.230004 containerd[2015]: time="2025-10-29T23:30:52.229944523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.172015679s" Oct 29 23:30:52.230004 containerd[2015]: time="2025-10-29T23:30:52.230001828Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 29 23:30:52.231517 containerd[2015]: time="2025-10-29T23:30:52.231446798Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 29 23:30:53.414171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348968865.mount: Deactivated successfully. Oct 29 23:30:53.924631 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 23:30:53.929812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 29 23:30:54.062677 containerd[2015]: time="2025-10-29T23:30:54.060183437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:54.062677 containerd[2015]: time="2025-10-29T23:30:54.061323222Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Oct 29 23:30:54.063278 containerd[2015]: time="2025-10-29T23:30:54.063213462Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:54.073229 containerd[2015]: time="2025-10-29T23:30:54.073147206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:54.074056 containerd[2015]: time="2025-10-29T23:30:54.073990309Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.842460935s" Oct 29 23:30:54.074056 containerd[2015]: time="2025-10-29T23:30:54.074051801Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 29 23:30:54.075713 containerd[2015]: time="2025-10-29T23:30:54.075629321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 29 23:30:54.297085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:30:54.310148 (kubelet)[2716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:30:54.382804 kubelet[2716]: E1029 23:30:54.382710 2716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:30:54.387261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:30:54.387789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:30:54.388780 systemd[1]: kubelet.service: Consumed 300ms CPU time, 105.4M memory peak. Oct 29 23:30:54.602069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781073278.mount: Deactivated successfully. 
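[annotation] Each "Pulled image" line reports the image size in bytes and the wall-clock pull time, so effective registry throughput can be read straight off the log. Using the kube-proxy pull above as an example (figures copied from that line):

    # Figures copied from the kube-proxy "Pulled image" line above.
    size_bytes = 27_416_836
    seconds = 1.842460935

    mib_per_s = size_bytes / (1024 * 1024) / seconds
    print(f"{mib_per_s:.1f} MiB/s effective pull throughput")  # ~14.2 MiB/s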
Oct 29 23:30:55.827672 containerd[2015]: time="2025-10-29T23:30:55.827594182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:55.829813 containerd[2015]: time="2025-10-29T23:30:55.829696114Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Oct 29 23:30:55.830462 containerd[2015]: time="2025-10-29T23:30:55.830426698Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:55.836831 containerd[2015]: time="2025-10-29T23:30:55.836779558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:55.838885 containerd[2015]: time="2025-10-29T23:30:55.838842130Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.763125692s" Oct 29 23:30:55.839059 containerd[2015]: time="2025-10-29T23:30:55.839030278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 29 23:30:55.839972 containerd[2015]: time="2025-10-29T23:30:55.839918878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 23:30:56.276081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount775743853.mount: Deactivated successfully. 
Oct 29 23:30:56.283566 containerd[2015]: time="2025-10-29T23:30:56.283483352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:30:56.285861 containerd[2015]: time="2025-10-29T23:30:56.285804308Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Oct 29 23:30:56.287320 containerd[2015]: time="2025-10-29T23:30:56.287246096Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:30:56.290627 containerd[2015]: time="2025-10-29T23:30:56.290550176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 29 23:30:56.292694 containerd[2015]: time="2025-10-29T23:30:56.292011356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 452.036954ms" Oct 29 23:30:56.292694 containerd[2015]: time="2025-10-29T23:30:56.292063976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 29 23:30:56.293481 containerd[2015]: time="2025-10-29T23:30:56.293080472Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 29 23:30:56.785160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331804038.mount: Deactivated successfully. 
Oct 29 23:30:58.744854 containerd[2015]: time="2025-10-29T23:30:58.744774384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:58.747399 containerd[2015]: time="2025-10-29T23:30:58.747347340Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Oct 29 23:30:58.749681 containerd[2015]: time="2025-10-29T23:30:58.749345028Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:58.754801 containerd[2015]: time="2025-10-29T23:30:58.754752972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:30:58.758237 containerd[2015]: time="2025-10-29T23:30:58.758173140Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.46504942s" Oct 29 23:30:58.758237 containerd[2015]: time="2025-10-29T23:30:58.758233488Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 29 23:30:59.792140 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 29 23:31:04.424872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 29 23:31:04.428984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:04.978898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:04.998149 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 29 23:31:05.073127 kubelet[2864]: E1029 23:31:05.073013 2864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 23:31:05.077872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 23:31:05.078345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 23:31:05.079306 systemd[1]: kubelet.service: Consumed 294ms CPU time, 104.9M memory peak. Oct 29 23:31:06.411499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:06.412734 systemd[1]: kubelet.service: Consumed 294ms CPU time, 104.9M memory peak. Oct 29 23:31:06.417075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:06.465925 systemd[1]: Reload requested from client PID 2878 ('systemctl') (unit session-9.scope)... Oct 29 23:31:06.465949 systemd[1]: Reloading... Oct 29 23:31:06.672687 zram_generator::config[2925]: No configuration found. Oct 29 23:31:07.135001 systemd[1]: Reloading finished in 668 ms. 
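The pulls above (kube-proxy v1.32.9, coredns v1.11.3, pause 3.10, etcd 3.5.16-0) are the component images kubeadm fetches ahead of init, while the kubelet itself is still crash-looping on the missing config file. Which images get pulled is governed by the kubeadm ClusterConfiguration; a minimal sketch, with the version inferred from the kube-proxy tag above and the API version assumed to be the schema used by recent kubeadm releases:

    apiVersion: kubeadm.k8s.io/v1beta4   # assumption: config schema used by kubeadm 1.31+
    kind: ClusterConfiguration
    kubernetesVersion: v1.32.9           # inferred from the kube-proxy image tag pulled above
    imageRepository: registry.k8s.io     # matches the registry seen in the pull logs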
Oct 29 23:31:07.251600 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 29 23:31:07.252121 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 29 23:31:07.253770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:07.253865 systemd[1]: kubelet.service: Consumed 213ms CPU time, 95M memory peak. Oct 29 23:31:07.257796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:07.588597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:07.604176 (kubelet)[2986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 23:31:07.683337 kubelet[2986]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:31:07.683889 kubelet[2986]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 23:31:07.683993 kubelet[2986]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:31:07.684362 kubelet[2986]: I1029 23:31:07.684306 2986 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 23:31:10.082215 kubelet[2986]: I1029 23:31:10.082160 2986 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 23:31:10.082873 kubelet[2986]: I1029 23:31:10.082843 2986 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 23:31:10.083860 kubelet[2986]: I1029 23:31:10.083821 2986 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 23:31:10.138560 kubelet[2986]: E1029 23:31:10.138492 2986 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:10.140820 kubelet[2986]: I1029 23:31:10.140699 2986 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 23:31:10.151989 kubelet[2986]: I1029 23:31:10.151948 2986 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 23:31:10.158323 kubelet[2986]: I1029 23:31:10.158255 2986 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 23:31:10.158783 kubelet[2986]: I1029 23:31:10.158730 2986 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 23:31:10.159087 kubelet[2986]: I1029 23:31:10.158785 2986 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 23:31:10.159259 kubelet[2986]: I1029 23:31:10.159238 2986 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 23:31:10.159322 kubelet[2986]: I1029 23:31:10.159261 2986 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 23:31:10.159610 kubelet[2986]: I1029 23:31:10.159582 2986 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:10.166222 kubelet[2986]: I1029 23:31:10.166052 2986 kubelet.go:446] "Attempting to sync node with API server" Oct 29 23:31:10.166222 kubelet[2986]: I1029 23:31:10.166104 2986 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 23:31:10.168703 kubelet[2986]: I1029 23:31:10.168180 2986 kubelet.go:352] "Adding apiserver pod source" Oct 29 23:31:10.168703 kubelet[2986]: I1029 23:31:10.168220 2986 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 23:31:10.175915 kubelet[2986]: W1029 23:31:10.175813 2986 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-28&limit=500&resourceVersion=0": dial tcp 172.31.30.28:6443: connect: connection refused Oct 29 23:31:10.176063 kubelet[2986]: E1029 23:31:10.175985 2986 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-28&limit=500&resourceVersion=0\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:10.177268 kubelet[2986]: W1029 
23:31:10.176993 2986 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.28:6443: connect: connection refused Oct 29 23:31:10.177268 kubelet[2986]: E1029 23:31:10.177140 2986 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:10.177483 kubelet[2986]: I1029 23:31:10.177352 2986 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 23:31:10.178465 kubelet[2986]: I1029 23:31:10.178425 2986 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 23:31:10.178717 kubelet[2986]: W1029 23:31:10.178686 2986 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 23:31:10.181555 kubelet[2986]: I1029 23:31:10.181510 2986 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 23:31:10.181744 kubelet[2986]: I1029 23:31:10.181590 2986 server.go:1287] "Started kubelet" Oct 29 23:31:10.194959 kubelet[2986]: E1029 23:31:10.193817 2986 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.28:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-28.18731a2f6a520819 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-28,UID:ip-172-31-30-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-28,},FirstTimestamp:2025-10-29 23:31:10.181541913 +0000 UTC m=+2.570574542,LastTimestamp:2025-10-29 23:31:10.181541913 +0000 UTC m=+2.570574542,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-28,}" Oct 29 23:31:10.195810 kubelet[2986]: I1029 23:31:10.195707 2986 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 23:31:10.196353 kubelet[2986]: I1029 23:31:10.196316 2986 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 23:31:10.197722 kubelet[2986]: I1029 23:31:10.197607 2986 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 23:31:10.198185 kubelet[2986]: I1029 23:31:10.198138 2986 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 23:31:10.198422 kubelet[2986]: I1029 23:31:10.198392 2986 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 23:31:10.199224 kubelet[2986]: I1029 23:31:10.199184 2986 server.go:479] "Adding debug handlers to kubelet server" Oct 29 23:31:10.202563 kubelet[2986]: I1029 23:31:10.202527 2986 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 23:31:10.203799 kubelet[2986]: E1029 23:31:10.203045 2986 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-30-28\" not found" Oct 29 23:31:10.204139 kubelet[2986]: I1029 23:31:10.204111 2986 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 23:31:10.204376 kubelet[2986]: I1029 23:31:10.204345 2986 reconciler.go:26] "Reconciler: start to sync state" Oct 29 23:31:10.207977 kubelet[2986]: W1029 23:31:10.207900 2986 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.28:6443: connect: connection refused Oct 29 23:31:10.208726 kubelet[2986]: E1029 23:31:10.208595 2986 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:10.209332 kubelet[2986]: E1029 23:31:10.209232 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-28?timeout=10s\": dial tcp 172.31.30.28:6443: connect: connection refused" interval="200ms" Oct 29 23:31:10.212826 kubelet[2986]: I1029 23:31:10.212232 2986 factory.go:221] Registration of the systemd container factory successfully Oct 29 23:31:10.212826 kubelet[2986]: I1029 23:31:10.212390 2986 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 23:31:10.215613 kubelet[2986]: I1029 23:31:10.215575 2986 factory.go:221] Registration of the containerd container factory successfully Oct 29 23:31:10.233827 kubelet[2986]: I1029 23:31:10.233750 2986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 23:31:10.237488 kubelet[2986]: I1029 23:31:10.237416 2986 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 23:31:10.237488 kubelet[2986]: I1029 23:31:10.237466 2986 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 23:31:10.237747 kubelet[2986]: I1029 23:31:10.237499 2986 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 23:31:10.237747 kubelet[2986]: I1029 23:31:10.237513 2986 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 23:31:10.237747 kubelet[2986]: E1029 23:31:10.237581 2986 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 23:31:10.251495 kubelet[2986]: W1029 23:31:10.251345 2986 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.28:6443: connect: connection refused Oct 29 23:31:10.251753 kubelet[2986]: E1029 23:31:10.251715 2986 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:10.253121 kubelet[2986]: E1029 23:31:10.253056 2986 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 23:31:10.262317 kubelet[2986]: I1029 23:31:10.262265 2986 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 23:31:10.262317 kubelet[2986]: I1029 23:31:10.262308 2986 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 23:31:10.262548 kubelet[2986]: I1029 23:31:10.262369 2986 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:10.265438 kubelet[2986]: I1029 23:31:10.265389 2986 policy_none.go:49] "None policy: Start" Oct 29 23:31:10.265438 kubelet[2986]: I1029 23:31:10.265429 2986 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 23:31:10.265596 kubelet[2986]: I1029 23:31:10.265455 2986 state_mem.go:35] "Initializing new in-memory state store" Oct 29 23:31:10.275862 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 29 23:31:10.294195 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 29 23:31:10.303688 kubelet[2986]: E1029 23:31:10.303544 2986 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-28\" not found" Oct 29 23:31:10.310472 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 29 23:31:10.313359 kubelet[2986]: I1029 23:31:10.313316 2986 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 23:31:10.313691 kubelet[2986]: I1029 23:31:10.313630 2986 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 23:31:10.313768 kubelet[2986]: I1029 23:31:10.313687 2986 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 23:31:10.317274 kubelet[2986]: I1029 23:31:10.317194 2986 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 23:31:10.319307 kubelet[2986]: E1029 23:31:10.319113 2986 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 23:31:10.319307 kubelet[2986]: E1029 23:31:10.319176 2986 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-28\" not found" Oct 29 23:31:10.359678 systemd[1]: Created slice kubepods-burstable-poddb04d55b3b53fdc31f94876995c5e3da.slice - libcontainer container kubepods-burstable-poddb04d55b3b53fdc31f94876995c5e3da.slice. Oct 29 23:31:10.372405 kubelet[2986]: E1029 23:31:10.372354 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:10.376738 systemd[1]: Created slice kubepods-burstable-pod48542c11e5a723b55b11807b28af7286.slice - libcontainer container kubepods-burstable-pod48542c11e5a723b55b11807b28af7286.slice. Oct 29 23:31:10.383004 kubelet[2986]: E1029 23:31:10.382884 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:10.389289 systemd[1]: Created slice kubepods-burstable-pod02ce712ed81d61aac5a226730878bc79.slice - libcontainer container kubepods-burstable-pod02ce712ed81d61aac5a226730878bc79.slice. Oct 29 23:31:10.393752 kubelet[2986]: E1029 23:31:10.393366 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:10.405617 kubelet[2986]: I1029 23:31:10.405578 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db04d55b3b53fdc31f94876995c5e3da-ca-certs\") pod \"kube-apiserver-ip-172-31-30-28\" (UID: \"db04d55b3b53fdc31f94876995c5e3da\") " pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:10.405814 kubelet[2986]: I1029 23:31:10.405788 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:10.405943 kubelet[2986]: I1029 23:31:10.405920 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:10.406142 kubelet[2986]: I1029 23:31:10.406068 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:10.406235 kubelet[2986]: I1029 23:31:10.406144 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " 
pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:10.406235 kubelet[2986]: I1029 23:31:10.406209 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02ce712ed81d61aac5a226730878bc79-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-28\" (UID: \"02ce712ed81d61aac5a226730878bc79\") " pod="kube-system/kube-scheduler-ip-172-31-30-28" Oct 29 23:31:10.406356 kubelet[2986]: I1029 23:31:10.406246 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db04d55b3b53fdc31f94876995c5e3da-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-28\" (UID: \"db04d55b3b53fdc31f94876995c5e3da\") " pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:10.406413 kubelet[2986]: I1029 23:31:10.406330 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db04d55b3b53fdc31f94876995c5e3da-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-28\" (UID: \"db04d55b3b53fdc31f94876995c5e3da\") " pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:10.406413 kubelet[2986]: I1029 23:31:10.406394 2986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:10.411226 kubelet[2986]: E1029 23:31:10.411141 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-28?timeout=10s\": dial tcp 172.31.30.28:6443: connect: connection refused" interval="400ms" Oct 29 23:31:10.420450 kubelet[2986]: I1029 23:31:10.420405 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-28" Oct 29 23:31:10.421177 kubelet[2986]: E1029 23:31:10.421128 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.28:6443/api/v1/nodes\": dial tcp 172.31.30.28:6443: connect: connection refused" node="ip-172-31-30-28" Oct 29 23:31:10.623628 kubelet[2986]: I1029 23:31:10.623504 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-28" Oct 29 23:31:10.624624 kubelet[2986]: E1029 23:31:10.624495 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.28:6443/api/v1/nodes\": dial tcp 172.31.30.28:6443: connect: connection refused" node="ip-172-31-30-28" Oct 29 23:31:10.674978 containerd[2015]: time="2025-10-29T23:31:10.674915819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-28,Uid:db04d55b3b53fdc31f94876995c5e3da,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:10.686691 containerd[2015]: time="2025-10-29T23:31:10.686608871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-28,Uid:48542c11e5a723b55b11807b28af7286,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:10.712183 containerd[2015]: time="2025-10-29T23:31:10.712094508Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-28,Uid:02ce712ed81d61aac5a226730878bc79,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:10.727864 containerd[2015]: time="2025-10-29T23:31:10.725719140Z" level=info msg="connecting to shim 5e55f95c4703eb5653ae7442406c155762ec9c0ab4e1b6dbd19f05a0c576fe00" address="unix:///run/containerd/s/d326392ac81fae4b169119cefa3be74744859d50b70aa48e0e737a76e010eddf" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:10.784492 containerd[2015]: time="2025-10-29T23:31:10.783644484Z" level=info msg="connecting to shim db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add" address="unix:///run/containerd/s/3863b3d2526d9f775381d79a69f718fe61b50e7d4810e5d0b587843e65733490" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:10.793060 containerd[2015]: time="2025-10-29T23:31:10.792643992Z" level=info msg="connecting to shim 7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080" address="unix:///run/containerd/s/7e866d3b76445c102bc7af9916b5fdc746c3555e3ff3a73f07828eecc34d8fd1" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:10.795088 systemd[1]: Started cri-containerd-5e55f95c4703eb5653ae7442406c155762ec9c0ab4e1b6dbd19f05a0c576fe00.scope - libcontainer container 5e55f95c4703eb5653ae7442406c155762ec9c0ab4e1b6dbd19f05a0c576fe00. Oct 29 23:31:10.812897 kubelet[2986]: E1029 23:31:10.812760 2986 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-28?timeout=10s\": dial tcp 172.31.30.28:6443: connect: connection refused" interval="800ms" Oct 29 23:31:10.872021 systemd[1]: Started cri-containerd-db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add.scope - libcontainer container db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add. Oct 29 23:31:10.900934 systemd[1]: Started cri-containerd-7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080.scope - libcontainer container 7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080. 
Oct 29 23:31:10.957484 containerd[2015]: time="2025-10-29T23:31:10.956463877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-28,Uid:db04d55b3b53fdc31f94876995c5e3da,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e55f95c4703eb5653ae7442406c155762ec9c0ab4e1b6dbd19f05a0c576fe00\"" Oct 29 23:31:10.989691 containerd[2015]: time="2025-10-29T23:31:10.989326789Z" level=info msg="CreateContainer within sandbox \"5e55f95c4703eb5653ae7442406c155762ec9c0ab4e1b6dbd19f05a0c576fe00\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 23:31:11.001615 containerd[2015]: time="2025-10-29T23:31:11.001549653Z" level=info msg="Container ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:11.012593 containerd[2015]: time="2025-10-29T23:31:11.012523413Z" level=info msg="CreateContainer within sandbox \"5e55f95c4703eb5653ae7442406c155762ec9c0ab4e1b6dbd19f05a0c576fe00\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca\"" Oct 29 23:31:11.015117 containerd[2015]: time="2025-10-29T23:31:11.015047625Z" level=info msg="StartContainer for \"ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca\"" Oct 29 23:31:11.018681 containerd[2015]: time="2025-10-29T23:31:11.018531489Z" level=info msg="connecting to shim ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca" address="unix:///run/containerd/s/d326392ac81fae4b169119cefa3be74744859d50b70aa48e0e737a76e010eddf" protocol=ttrpc version=3 Oct 29 23:31:11.029691 kubelet[2986]: I1029 23:31:11.029379 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-28" Oct 29 23:31:11.031079 kubelet[2986]: E1029 23:31:11.030999 2986 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.28:6443/api/v1/nodes\": dial tcp 172.31.30.28:6443: connect: connection refused" node="ip-172-31-30-28" Oct 29 23:31:11.086820 containerd[2015]: time="2025-10-29T23:31:11.086628741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-28,Uid:02ce712ed81d61aac5a226730878bc79,Namespace:kube-system,Attempt:0,} returns sandbox id \"db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add\"" Oct 29 23:31:11.097643 containerd[2015]: time="2025-10-29T23:31:11.097044753Z" level=info msg="CreateContainer within sandbox \"db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 23:31:11.100119 systemd[1]: Started cri-containerd-ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca.scope - libcontainer container ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca. 
Oct 29 23:31:11.107700 containerd[2015]: time="2025-10-29T23:31:11.107572498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-28,Uid:48542c11e5a723b55b11807b28af7286,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080\"" Oct 29 23:31:11.119406 containerd[2015]: time="2025-10-29T23:31:11.119318518Z" level=info msg="CreateContainer within sandbox \"7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 23:31:11.125228 containerd[2015]: time="2025-10-29T23:31:11.125090950Z" level=info msg="Container aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:11.138498 containerd[2015]: time="2025-10-29T23:31:11.138423886Z" level=info msg="CreateContainer within sandbox \"db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\"" Oct 29 23:31:11.139996 containerd[2015]: time="2025-10-29T23:31:11.139932394Z" level=info msg="StartContainer for \"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\"" Oct 29 23:31:11.142093 containerd[2015]: time="2025-10-29T23:31:11.142012642Z" level=info msg="connecting to shim aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d" address="unix:///run/containerd/s/3863b3d2526d9f775381d79a69f718fe61b50e7d4810e5d0b587843e65733490" protocol=ttrpc version=3 Oct 29 23:31:11.146874 containerd[2015]: time="2025-10-29T23:31:11.146756674Z" level=info msg="Container 753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:11.181661 containerd[2015]: time="2025-10-29T23:31:11.181588138Z" level=info msg="CreateContainer within sandbox \"7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\"" Oct 29 23:31:11.186149 containerd[2015]: time="2025-10-29T23:31:11.186089050Z" level=info msg="StartContainer for \"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\"" Oct 29 23:31:11.193050 containerd[2015]: time="2025-10-29T23:31:11.192881194Z" level=info msg="connecting to shim 753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86" address="unix:///run/containerd/s/7e866d3b76445c102bc7af9916b5fdc746c3555e3ff3a73f07828eecc34d8fd1" protocol=ttrpc version=3 Oct 29 23:31:11.223018 systemd[1]: Started cri-containerd-aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d.scope - libcontainer container aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d. Oct 29 23:31:11.255468 systemd[1]: Started cri-containerd-753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86.scope - libcontainer container 753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86. 
Oct 29 23:31:11.275531 containerd[2015]: time="2025-10-29T23:31:11.275343682Z" level=info msg="StartContainer for \"ceb3d3fa5e5b8ecba27690e929ae8beadac4a37983e6a7efa5ba1fa3a4a040ca\" returns successfully" Oct 29 23:31:11.303488 kubelet[2986]: E1029 23:31:11.303448 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:11.368689 kubelet[2986]: W1029 23:31:11.368404 2986 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.28:6443: connect: connection refused Oct 29 23:31:11.369823 kubelet[2986]: E1029 23:31:11.369770 2986 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:11.435441 containerd[2015]: time="2025-10-29T23:31:11.434940767Z" level=info msg="StartContainer for \"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\" returns successfully" Oct 29 23:31:11.458009 kubelet[2986]: W1029 23:31:11.457858 2986 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.28:6443: connect: connection refused Oct 29 23:31:11.458009 kubelet[2986]: E1029 23:31:11.457951 2986 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.28:6443: connect: connection refused" logger="UnhandledError" Oct 29 23:31:11.474422 containerd[2015]: time="2025-10-29T23:31:11.474360095Z" level=info msg="StartContainer for \"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\" returns successfully" Oct 29 23:31:11.834820 kubelet[2986]: I1029 23:31:11.834525 2986 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-28" Oct 29 23:31:12.310817 kubelet[2986]: E1029 23:31:12.310436 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:12.320225 kubelet[2986]: E1029 23:31:12.319853 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:12.331040 kubelet[2986]: E1029 23:31:12.331005 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:12.867879 update_engine[1990]: I20251029 23:31:12.867796 1990 update_attempter.cc:509] Updating boot flags... 
Oct 29 23:31:13.352546 kubelet[2986]: E1029 23:31:13.352508 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:13.356886 kubelet[2986]: E1029 23:31:13.356468 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:13.357406 kubelet[2986]: E1029 23:31:13.356672 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:14.335681 kubelet[2986]: E1029 23:31:14.335432 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:14.339879 kubelet[2986]: E1029 23:31:14.338224 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:15.723002 kubelet[2986]: E1029 23:31:15.722953 2986 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:15.871763 kubelet[2986]: E1029 23:31:15.871693 2986 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-28\" not found" node="ip-172-31-30-28" Oct 29 23:31:15.989700 kubelet[2986]: I1029 23:31:15.987049 2986 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-28" Oct 29 23:31:16.004288 kubelet[2986]: I1029 23:31:16.004207 2986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-28" Oct 29 23:31:16.036783 kubelet[2986]: E1029 23:31:16.036715 2986 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-28" Oct 29 23:31:16.037321 kubelet[2986]: I1029 23:31:16.036971 2986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:16.048062 kubelet[2986]: E1029 23:31:16.048022 2986 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:16.048588 kubelet[2986]: I1029 23:31:16.048259 2986 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:16.054815 kubelet[2986]: E1029 23:31:16.054772 2986 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:16.179865 kubelet[2986]: I1029 23:31:16.179800 2986 apiserver.go:52] "Watching apiserver" Oct 29 23:31:16.204970 kubelet[2986]: I1029 23:31:16.204899 2986 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 23:31:18.099069 systemd[1]: Reload requested from client PID 3440 ('systemctl') (unit session-9.scope)... Oct 29 23:31:18.099093 systemd[1]: Reloading... 
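The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are a transient race: the static control-plane pods request that class, and the built-in PriorityClasses are only created by the API server as part of its own bootstrap, so the first mirror-pod attempts after the API server comes up can still be rejected and are retried. For reference, the two built-in classes as defined upstream (shown as a sketch):

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000            # highest built-in priority; requested by the static control-plane pods
    ---
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-cluster-critical
    value: 2000000000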
Oct 29 23:31:18.447713 zram_generator::config[3494]: No configuration found. Oct 29 23:31:18.928549 systemd[1]: Reloading finished in 828 ms. Oct 29 23:31:18.993308 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:19.008824 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 23:31:19.009310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:19.009397 systemd[1]: kubelet.service: Consumed 3.385s CPU time, 129M memory peak. Oct 29 23:31:19.015819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 29 23:31:19.380386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 29 23:31:19.395202 (kubelet)[3545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 29 23:31:19.517629 kubelet[3545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:31:19.517629 kubelet[3545]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 23:31:19.517629 kubelet[3545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 23:31:19.518171 kubelet[3545]: I1029 23:31:19.517761 3545 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 23:31:19.536515 kubelet[3545]: I1029 23:31:19.535191 3545 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 23:31:19.536515 kubelet[3545]: I1029 23:31:19.535238 3545 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 23:31:19.536515 kubelet[3545]: I1029 23:31:19.535790 3545 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 23:31:19.538507 kubelet[3545]: I1029 23:31:19.538456 3545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 29 23:31:19.548212 kubelet[3545]: I1029 23:31:19.548144 3545 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 23:31:19.569473 kubelet[3545]: I1029 23:31:19.568501 3545 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 29 23:31:19.577341 kubelet[3545]: I1029 23:31:19.577287 3545 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 23:31:19.578665 kubelet[3545]: I1029 23:31:19.577858 3545 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 23:31:19.578665 kubelet[3545]: I1029 23:31:19.577913 3545 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 23:31:19.578665 kubelet[3545]: I1029 23:31:19.578211 3545 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 23:31:19.578665 kubelet[3545]: I1029 23:31:19.578246 3545 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 23:31:19.579075 kubelet[3545]: I1029 23:31:19.578340 3545 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:19.579075 kubelet[3545]: I1029 23:31:19.578593 3545 kubelet.go:446] "Attempting to sync node with API server" Oct 29 23:31:19.579075 kubelet[3545]: I1029 23:31:19.578620 3545 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 23:31:19.579866 kubelet[3545]: I1029 23:31:19.579756 3545 kubelet.go:352] "Adding apiserver pod source" Oct 29 23:31:19.579866 kubelet[3545]: I1029 23:31:19.579790 3545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 23:31:19.585323 kubelet[3545]: I1029 23:31:19.583800 3545 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 29 23:31:19.585323 kubelet[3545]: I1029 23:31:19.584569 3545 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 23:31:19.586485 kubelet[3545]: I1029 23:31:19.586408 3545 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 23:31:19.586485 kubelet[3545]: I1029 23:31:19.586478 3545 server.go:1287] "Started kubelet" Oct 29 23:31:19.599285 kubelet[3545]: I1029 23:31:19.599237 3545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 23:31:19.610738 kubelet[3545]: I1029 23:31:19.609854 3545 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Oct 29 23:31:19.613711 kubelet[3545]: I1029 23:31:19.613227 3545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 23:31:19.614698 kubelet[3545]: I1029 23:31:19.614446 3545 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 23:31:19.618351 kubelet[3545]: I1029 23:31:19.618265 3545 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 23:31:19.619632 kubelet[3545]: E1029 23:31:19.618898 3545 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-28\" not found" Oct 29 23:31:19.622973 kubelet[3545]: I1029 23:31:19.622094 3545 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 23:31:19.622973 kubelet[3545]: I1029 23:31:19.622336 3545 reconciler.go:26] "Reconciler: start to sync state" Oct 29 23:31:19.625592 kubelet[3545]: I1029 23:31:19.624808 3545 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 23:31:19.625925 kubelet[3545]: I1029 23:31:19.625889 3545 server.go:479] "Adding debug handlers to kubelet server" Oct 29 23:31:19.659052 kubelet[3545]: I1029 23:31:19.658920 3545 factory.go:221] Registration of the systemd container factory successfully Oct 29 23:31:19.659397 kubelet[3545]: I1029 23:31:19.659364 3545 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 23:31:19.685430 kubelet[3545]: I1029 23:31:19.685377 3545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 23:31:19.692281 kubelet[3545]: I1029 23:31:19.692240 3545 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 23:31:19.694452 kubelet[3545]: I1029 23:31:19.694417 3545 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 23:31:19.695013 kubelet[3545]: I1029 23:31:19.694911 3545 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 23:31:19.695903 kubelet[3545]: I1029 23:31:19.695209 3545 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 23:31:19.695903 kubelet[3545]: E1029 23:31:19.695310 3545 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 23:31:19.721611 kubelet[3545]: E1029 23:31:19.721563 3545 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-28\" not found" Oct 29 23:31:19.722901 kubelet[3545]: I1029 23:31:19.721805 3545 factory.go:221] Registration of the containerd container factory successfully Oct 29 23:31:19.795944 kubelet[3545]: E1029 23:31:19.795885 3545 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 23:31:19.860578 kubelet[3545]: I1029 23:31:19.860531 3545 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 23:31:19.860578 kubelet[3545]: I1029 23:31:19.860563 3545 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 23:31:19.860808 kubelet[3545]: I1029 23:31:19.860756 3545 state_mem.go:36] "Initialized new in-memory state store" Oct 29 23:31:19.861068 kubelet[3545]: I1029 23:31:19.861030 3545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 23:31:19.861129 kubelet[3545]: I1029 23:31:19.861062 3545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 23:31:19.861129 kubelet[3545]: I1029 23:31:19.861100 3545 policy_none.go:49] "None policy: Start" Oct 29 23:31:19.861129 kubelet[3545]: I1029 23:31:19.861119 3545 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 23:31:19.861288 kubelet[3545]: I1029 23:31:19.861139 3545 state_mem.go:35] "Initializing new in-memory state store" Oct 29 23:31:19.861714 kubelet[3545]: I1029 23:31:19.861559 3545 state_mem.go:75] "Updated machine memory state" Oct 29 23:31:19.877386 kubelet[3545]: I1029 23:31:19.877219 3545 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 23:31:19.880584 kubelet[3545]: I1029 23:31:19.878935 3545 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 23:31:19.880584 kubelet[3545]: I1029 23:31:19.878993 3545 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 23:31:19.880990 kubelet[3545]: I1029 23:31:19.880833 3545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 23:31:19.887469 kubelet[3545]: E1029 23:31:19.886307 3545 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 23:31:19.997609 kubelet[3545]: I1029 23:31:19.997415 3545 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:19.997953 kubelet[3545]: I1029 23:31:19.997435 3545 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-28" Oct 29 23:31:19.998125 kubelet[3545]: I1029 23:31:19.997746 3545 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:20.015822 kubelet[3545]: I1029 23:31:20.014995 3545 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-28" Oct 29 23:31:20.042880 kubelet[3545]: I1029 23:31:20.042824 3545 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-28" Oct 29 23:31:20.043055 kubelet[3545]: I1029 23:31:20.042990 3545 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-28" Oct 29 23:31:20.065556 kubelet[3545]: I1029 23:31:20.065254 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db04d55b3b53fdc31f94876995c5e3da-ca-certs\") pod \"kube-apiserver-ip-172-31-30-28\" (UID: \"db04d55b3b53fdc31f94876995c5e3da\") " pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:20.065556 kubelet[3545]: I1029 23:31:20.065319 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db04d55b3b53fdc31f94876995c5e3da-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-28\" (UID: \"db04d55b3b53fdc31f94876995c5e3da\") " pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:20.065556 kubelet[3545]: I1029 23:31:20.065360 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db04d55b3b53fdc31f94876995c5e3da-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-28\" (UID: \"db04d55b3b53fdc31f94876995c5e3da\") " pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:20.065556 kubelet[3545]: I1029 23:31:20.065409 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:20.065556 kubelet[3545]: I1029 23:31:20.065477 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:20.066515 kubelet[3545]: I1029 23:31:20.066470 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:20.067681 kubelet[3545]: I1029 23:31:20.066980 3545 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:20.068206 kubelet[3545]: I1029 23:31:20.067936 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48542c11e5a723b55b11807b28af7286-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-28\" (UID: \"48542c11e5a723b55b11807b28af7286\") " pod="kube-system/kube-controller-manager-ip-172-31-30-28" Oct 29 23:31:20.068612 kubelet[3545]: I1029 23:31:20.068559 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02ce712ed81d61aac5a226730878bc79-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-28\" (UID: \"02ce712ed81d61aac5a226730878bc79\") " pod="kube-system/kube-scheduler-ip-172-31-30-28" Oct 29 23:31:20.598681 kubelet[3545]: I1029 23:31:20.598605 3545 apiserver.go:52] "Watching apiserver" Oct 29 23:31:20.622507 kubelet[3545]: I1029 23:31:20.622425 3545 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 23:31:20.798424 kubelet[3545]: I1029 23:31:20.798346 3545 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:20.821092 kubelet[3545]: E1029 23:31:20.820638 3545 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-28\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-28" Oct 29 23:31:20.875221 kubelet[3545]: I1029 23:31:20.874784 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-28" podStartSLOduration=0.87475981 podStartE2EDuration="874.75981ms" podCreationTimestamp="2025-10-29 23:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:20.853977658 +0000 UTC m=+1.449115280" watchObservedRunningTime="2025-10-29 23:31:20.87475981 +0000 UTC m=+1.469897432" Oct 29 23:31:20.895776 kubelet[3545]: I1029 23:31:20.895384 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-28" podStartSLOduration=0.895356442 podStartE2EDuration="895.356442ms" podCreationTimestamp="2025-10-29 23:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:20.893558242 +0000 UTC m=+1.488695864" watchObservedRunningTime="2025-10-29 23:31:20.895356442 +0000 UTC m=+1.490494052" Oct 29 23:31:20.895776 kubelet[3545]: I1029 23:31:20.895545 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-28" podStartSLOduration=0.895535314 podStartE2EDuration="895.535314ms" podCreationTimestamp="2025-10-29 23:31:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:20.876687166 +0000 UTC m=+1.471824776" watchObservedRunningTime="2025-10-29 23:31:20.895535314 +0000 UTC m=+1.490673092" Oct 29 23:31:22.818621 kubelet[3545]: 
I1029 23:31:22.818153 3545 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 23:31:22.820898 kubelet[3545]: I1029 23:31:22.820672 3545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 23:31:22.820986 containerd[2015]: time="2025-10-29T23:31:22.818896644Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 23:31:23.841926 systemd[1]: Created slice kubepods-besteffort-podf453c36a_e41f_45c7_bd7a_a8e7a7a5c012.slice - libcontainer container kubepods-besteffort-podf453c36a_e41f_45c7_bd7a_a8e7a7a5c012.slice. Oct 29 23:31:23.893919 kubelet[3545]: I1029 23:31:23.893868 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f453c36a-e41f-45c7-bd7a-a8e7a7a5c012-kube-proxy\") pod \"kube-proxy-jqz5p\" (UID: \"f453c36a-e41f-45c7-bd7a-a8e7a7a5c012\") " pod="kube-system/kube-proxy-jqz5p" Oct 29 23:31:23.896169 kubelet[3545]: I1029 23:31:23.895799 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f453c36a-e41f-45c7-bd7a-a8e7a7a5c012-lib-modules\") pod \"kube-proxy-jqz5p\" (UID: \"f453c36a-e41f-45c7-bd7a-a8e7a7a5c012\") " pod="kube-system/kube-proxy-jqz5p" Oct 29 23:31:23.896169 kubelet[3545]: I1029 23:31:23.895934 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2l8z\" (UniqueName: \"kubernetes.io/projected/f453c36a-e41f-45c7-bd7a-a8e7a7a5c012-kube-api-access-f2l8z\") pod \"kube-proxy-jqz5p\" (UID: \"f453c36a-e41f-45c7-bd7a-a8e7a7a5c012\") " pod="kube-system/kube-proxy-jqz5p" Oct 29 23:31:23.896169 kubelet[3545]: I1029 23:31:23.896118 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f453c36a-e41f-45c7-bd7a-a8e7a7a5c012-xtables-lock\") pod \"kube-proxy-jqz5p\" (UID: \"f453c36a-e41f-45c7-bd7a-a8e7a7a5c012\") " pod="kube-system/kube-proxy-jqz5p" Oct 29 23:31:23.973394 systemd[1]: Created slice kubepods-besteffort-pod95b33ac5_c949_43f5_960d_a32d02e0a5fc.slice - libcontainer container kubepods-besteffort-pod95b33ac5_c949_43f5_960d_a32d02e0a5fc.slice. 
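The systemd slice names in the entries above appear to follow a simple pattern: the pod UID has its dashes replaced with underscores and is nested under the kubepods-besteffort hierarchy. A minimal Python sketch of that naming rule, using the kube-proxy-jqz5p UID from these lines (an illustration of the pattern visible in this log, not kubelet source code):

# Illustration of the slice-naming pattern seen above, not kubelet code.
def besteffort_pod_slice(pod_uid: str) -> str:
    # Dashes in the pod UID are replaced with underscores inside the unit name.
    return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

# UID taken from the kube-proxy-jqz5p entries in this log.
print(besteffort_pod_slice("f453c36a-e41f-45c7-bd7a-a8e7a7a5c012"))
# -> kubepods-besteffort-podf453c36a_e41f_45c7_bd7a_a8e7a7a5c012.slice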
Oct 29 23:31:24.098526 kubelet[3545]: I1029 23:31:24.098362 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95b33ac5-c949-43f5-960d-a32d02e0a5fc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2t2qj\" (UID: \"95b33ac5-c949-43f5-960d-a32d02e0a5fc\") " pod="tigera-operator/tigera-operator-7dcd859c48-2t2qj" Oct 29 23:31:24.098526 kubelet[3545]: I1029 23:31:24.098433 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-455rx\" (UniqueName: \"kubernetes.io/projected/95b33ac5-c949-43f5-960d-a32d02e0a5fc-kube-api-access-455rx\") pod \"tigera-operator-7dcd859c48-2t2qj\" (UID: \"95b33ac5-c949-43f5-960d-a32d02e0a5fc\") " pod="tigera-operator/tigera-operator-7dcd859c48-2t2qj" Oct 29 23:31:24.162113 containerd[2015]: time="2025-10-29T23:31:24.162043054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqz5p,Uid:f453c36a-e41f-45c7-bd7a-a8e7a7a5c012,Namespace:kube-system,Attempt:0,}" Oct 29 23:31:24.199781 containerd[2015]: time="2025-10-29T23:31:24.199633763Z" level=info msg="connecting to shim ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0" address="unix:///run/containerd/s/44946ee13cea338f98f80ec5c5330807e3bc07f72717a0bfafdbaba6c02afde2" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:24.273064 systemd[1]: Started cri-containerd-ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0.scope - libcontainer container ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0. Oct 29 23:31:24.284230 containerd[2015]: time="2025-10-29T23:31:24.284140235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2t2qj,Uid:95b33ac5-c949-43f5-960d-a32d02e0a5fc,Namespace:tigera-operator,Attempt:0,}" Oct 29 23:31:24.325540 containerd[2015]: time="2025-10-29T23:31:24.325452695Z" level=info msg="connecting to shim 7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a" address="unix:///run/containerd/s/2a6f479d788c9c0ff8dec5ae0f111ef51f9f12e6361804d3ad59442219c676db" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:24.397629 containerd[2015]: time="2025-10-29T23:31:24.397483872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqz5p,Uid:f453c36a-e41f-45c7-bd7a-a8e7a7a5c012,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0\"" Oct 29 23:31:24.406042 systemd[1]: Started cri-containerd-7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a.scope - libcontainer container 7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a. 
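The containerd entries above use a logfmt-style key="value" layout (time=..., level=..., msg=..., address=...). A small illustrative parser, assuming shell-style quoting handled by shlex; this is not part of containerd or any tool mentioned in the log, just a sketch for pulling fields such as msg or address out of a line so a sandbox's shim socket can be matched against later StartContainer entries:

import shlex

def parse_logfmt(line: str) -> dict:
    # Split on whitespace while honouring double-quoted values, then break
    # each token at the first "=".
    fields = {}
    for token in shlex.split(line):
        key, sep, value = token.partition("=")
        if sep:
            fields[key] = value
    return fields

sample = ('time="2025-10-29T23:31:24.199633763Z" level=info '
          'msg="connecting to shim ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0" '
          'namespace=k8s.io protocol=ttrpc version=3')
print(parse_logfmt(sample)["msg"])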
Oct 29 23:31:24.411700 containerd[2015]: time="2025-10-29T23:31:24.411576960Z" level=info msg="CreateContainer within sandbox \"ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 23:31:24.436807 containerd[2015]: time="2025-10-29T23:31:24.436748844Z" level=info msg="Container ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:24.450274 containerd[2015]: time="2025-10-29T23:31:24.450187416Z" level=info msg="CreateContainer within sandbox \"ecf156c08124af875982c0f7c2276c56126fe9bb2c412be2a8253e6102342bc0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef\"" Oct 29 23:31:24.452153 containerd[2015]: time="2025-10-29T23:31:24.452087052Z" level=info msg="StartContainer for \"ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef\"" Oct 29 23:31:24.457598 containerd[2015]: time="2025-10-29T23:31:24.457522560Z" level=info msg="connecting to shim ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef" address="unix:///run/containerd/s/44946ee13cea338f98f80ec5c5330807e3bc07f72717a0bfafdbaba6c02afde2" protocol=ttrpc version=3 Oct 29 23:31:24.505995 systemd[1]: Started cri-containerd-ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef.scope - libcontainer container ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef. Oct 29 23:31:24.519223 containerd[2015]: time="2025-10-29T23:31:24.518993064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2t2qj,Uid:95b33ac5-c949-43f5-960d-a32d02e0a5fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a\"" Oct 29 23:31:24.526684 containerd[2015]: time="2025-10-29T23:31:24.525995484Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 23:31:24.599848 containerd[2015]: time="2025-10-29T23:31:24.599691817Z" level=info msg="StartContainer for \"ceccf45712c30a6162b90d115db161d82f5e974e3b4ed1aaaeca2cfe336c59ef\" returns successfully" Oct 29 23:31:26.277809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235742265.mount: Deactivated successfully. 
Oct 29 23:31:27.126632 containerd[2015]: time="2025-10-29T23:31:27.126070561Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:27.127892 containerd[2015]: time="2025-10-29T23:31:27.127830229Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Oct 29 23:31:27.128772 containerd[2015]: time="2025-10-29T23:31:27.128711881Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:27.133484 containerd[2015]: time="2025-10-29T23:31:27.133419397Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:27.136159 containerd[2015]: time="2025-10-29T23:31:27.135974473Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.609915449s" Oct 29 23:31:27.136159 containerd[2015]: time="2025-10-29T23:31:27.136026733Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Oct 29 23:31:27.142032 containerd[2015]: time="2025-10-29T23:31:27.141928453Z" level=info msg="CreateContainer within sandbox \"7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 23:31:27.157885 containerd[2015]: time="2025-10-29T23:31:27.157323049Z" level=info msg="Container d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:27.165414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209032956.mount: Deactivated successfully. Oct 29 23:31:27.174881 containerd[2015]: time="2025-10-29T23:31:27.174788737Z" level=info msg="CreateContainer within sandbox \"7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\"" Oct 29 23:31:27.177579 containerd[2015]: time="2025-10-29T23:31:27.177092881Z" level=info msg="StartContainer for \"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\"" Oct 29 23:31:27.179726 containerd[2015]: time="2025-10-29T23:31:27.179611741Z" level=info msg="connecting to shim d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44" address="unix:///run/containerd/s/2a6f479d788c9c0ff8dec5ae0f111ef51f9f12e6361804d3ad59442219c676db" protocol=ttrpc version=3 Oct 29 23:31:27.220957 systemd[1]: Started cri-containerd-d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44.scope - libcontainer container d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44. 
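The "in 2.609915449s" figure reported for the tigera-operator image closely matches the gap between the earlier PullImage entry (23:31:24.525995484Z) and the Pulled entry above (23:31:27.135974473Z). A quick arithmetic check, with the timestamps trimmed to microseconds:

from datetime import datetime

# Timestamps copied from the two containerd entries, trimmed to microseconds
# because datetime.fromisoformat() does not accept nanosecond precision.
pull_requested = datetime.fromisoformat("2025-10-29T23:31:24.525995")
pull_finished = datetime.fromisoformat("2025-10-29T23:31:27.135974")
print((pull_finished - pull_requested).total_seconds())
# -> ~2.609979, close to the 2.609915449s reported by containerd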
Oct 29 23:31:27.277000 containerd[2015]: time="2025-10-29T23:31:27.276941426Z" level=info msg="StartContainer for \"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\" returns successfully" Oct 29 23:31:27.852002 kubelet[3545]: I1029 23:31:27.851782 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jqz5p" podStartSLOduration=4.851565917 podStartE2EDuration="4.851565917s" podCreationTimestamp="2025-10-29 23:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:31:24.856235258 +0000 UTC m=+5.451372856" watchObservedRunningTime="2025-10-29 23:31:27.851565917 +0000 UTC m=+8.446703515" Oct 29 23:31:27.853902 kubelet[3545]: I1029 23:31:27.853454 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2t2qj" podStartSLOduration=2.239314572 podStartE2EDuration="4.853426805s" podCreationTimestamp="2025-10-29 23:31:23 +0000 UTC" firstStartedPulling="2025-10-29 23:31:24.523642932 +0000 UTC m=+5.118780530" lastFinishedPulling="2025-10-29 23:31:27.137755177 +0000 UTC m=+7.732892763" observedRunningTime="2025-10-29 23:31:27.850523417 +0000 UTC m=+8.445661015" watchObservedRunningTime="2025-10-29 23:31:27.853426805 +0000 UTC m=+8.448564415" Oct 29 23:31:36.065890 sudo[2402]: pam_unix(sudo:session): session closed for user root Oct 29 23:31:36.089316 sshd[2401]: Connection closed by 139.178.89.65 port 45114 Oct 29 23:31:36.089884 sshd-session[2398]: pam_unix(sshd:session): session closed for user core Oct 29 23:31:36.100323 systemd[1]: sshd@8-172.31.30.28:22-139.178.89.65:45114.service: Deactivated successfully. Oct 29 23:31:36.105518 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 23:31:36.108782 systemd[1]: session-9.scope: Consumed 11.227s CPU time, 223.6M memory peak. Oct 29 23:31:36.119539 systemd-logind[1985]: Session 9 logged out. Waiting for processes to exit. Oct 29 23:31:36.122937 systemd-logind[1985]: Removed session 9. Oct 29 23:31:55.467277 systemd[1]: Created slice kubepods-besteffort-pod41a04211_fee2_49b9_94af_1d8cf24fb26a.slice - libcontainer container kubepods-besteffort-pod41a04211_fee2_49b9_94af_1d8cf24fb26a.slice. 
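In the tigera-operator startup entry above, podStartSLOduration appears to equal podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling). That reading of the fields is an interpretation rather than something the log states, but the arithmetic checks out against the logged values:

from datetime import datetime

# Values copied from the tigera-operator pod_startup_latency_tracker entry,
# trimmed to microsecond precision.
first_started_pulling = datetime.fromisoformat("2025-10-29 23:31:24.523642")
last_finished_pulling = datetime.fromisoformat("2025-10-29 23:31:27.137755")
pull_seconds = (last_finished_pulling - first_started_pulling).total_seconds()

pod_start_e2e = 4.853426805          # podStartE2EDuration
print(pod_start_e2e - pull_seconds)  # -> ~2.23931, matching podStartSLOduration=2.239314572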
Oct 29 23:31:55.513005 kubelet[3545]: I1029 23:31:55.512928 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6px4w\" (UniqueName: \"kubernetes.io/projected/41a04211-fee2-49b9-94af-1d8cf24fb26a-kube-api-access-6px4w\") pod \"calico-typha-5b6f8b58c9-pnrz5\" (UID: \"41a04211-fee2-49b9-94af-1d8cf24fb26a\") " pod="calico-system/calico-typha-5b6f8b58c9-pnrz5" Oct 29 23:31:55.513005 kubelet[3545]: I1029 23:31:55.513010 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41a04211-fee2-49b9-94af-1d8cf24fb26a-tigera-ca-bundle\") pod \"calico-typha-5b6f8b58c9-pnrz5\" (UID: \"41a04211-fee2-49b9-94af-1d8cf24fb26a\") " pod="calico-system/calico-typha-5b6f8b58c9-pnrz5" Oct 29 23:31:55.513668 kubelet[3545]: I1029 23:31:55.513050 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41a04211-fee2-49b9-94af-1d8cf24fb26a-typha-certs\") pod \"calico-typha-5b6f8b58c9-pnrz5\" (UID: \"41a04211-fee2-49b9-94af-1d8cf24fb26a\") " pod="calico-system/calico-typha-5b6f8b58c9-pnrz5" Oct 29 23:31:55.651361 systemd[1]: Created slice kubepods-besteffort-pod8c29a2f9_a313_4a09_a89d_cc2ececf2c09.slice - libcontainer container kubepods-besteffort-pod8c29a2f9_a313_4a09_a89d_cc2ececf2c09.slice. Oct 29 23:31:55.714661 kubelet[3545]: I1029 23:31:55.714063 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-node-certs\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714820 kubelet[3545]: I1029 23:31:55.714696 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-xtables-lock\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714820 kubelet[3545]: I1029 23:31:55.714743 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-lib-modules\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714820 kubelet[3545]: I1029 23:31:55.714782 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbdsh\" (UniqueName: \"kubernetes.io/projected/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-kube-api-access-dbdsh\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714995 kubelet[3545]: I1029 23:31:55.714825 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-cni-log-dir\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714995 kubelet[3545]: I1029 23:31:55.714864 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-policysync\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714995 kubelet[3545]: I1029 23:31:55.714898 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-tigera-ca-bundle\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714995 kubelet[3545]: I1029 23:31:55.714937 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-flexvol-driver-host\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.714995 kubelet[3545]: I1029 23:31:55.714976 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-cni-bin-dir\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.715235 kubelet[3545]: I1029 23:31:55.715012 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-var-run-calico\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.715235 kubelet[3545]: I1029 23:31:55.715046 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-var-lib-calico\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.715235 kubelet[3545]: I1029 23:31:55.715082 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c29a2f9-a313-4a09-a89d-cc2ececf2c09-cni-net-dir\") pod \"calico-node-r6f46\" (UID: \"8c29a2f9-a313-4a09-a89d-cc2ececf2c09\") " pod="calico-system/calico-node-r6f46" Oct 29 23:31:55.764274 kubelet[3545]: E1029 23:31:55.764065 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:31:55.783222 containerd[2015]: time="2025-10-29T23:31:55.783153703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6f8b58c9-pnrz5,Uid:41a04211-fee2-49b9-94af-1d8cf24fb26a,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:55.821695 kubelet[3545]: I1029 23:31:55.816031 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3348575-d754-476b-94b5-28b2df5efe85-socket-dir\") pod \"csi-node-driver-wm5cb\" (UID: \"a3348575-d754-476b-94b5-28b2df5efe85\") " pod="calico-system/csi-node-driver-wm5cb" Oct 29 
23:31:55.821695 kubelet[3545]: I1029 23:31:55.816149 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl4cl\" (UniqueName: \"kubernetes.io/projected/a3348575-d754-476b-94b5-28b2df5efe85-kube-api-access-xl4cl\") pod \"csi-node-driver-wm5cb\" (UID: \"a3348575-d754-476b-94b5-28b2df5efe85\") " pod="calico-system/csi-node-driver-wm5cb" Oct 29 23:31:55.821695 kubelet[3545]: I1029 23:31:55.816267 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3348575-d754-476b-94b5-28b2df5efe85-registration-dir\") pod \"csi-node-driver-wm5cb\" (UID: \"a3348575-d754-476b-94b5-28b2df5efe85\") " pod="calico-system/csi-node-driver-wm5cb" Oct 29 23:31:55.821695 kubelet[3545]: I1029 23:31:55.816309 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3348575-d754-476b-94b5-28b2df5efe85-kubelet-dir\") pod \"csi-node-driver-wm5cb\" (UID: \"a3348575-d754-476b-94b5-28b2df5efe85\") " pod="calico-system/csi-node-driver-wm5cb" Oct 29 23:31:55.821695 kubelet[3545]: I1029 23:31:55.816344 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a3348575-d754-476b-94b5-28b2df5efe85-varrun\") pod \"csi-node-driver-wm5cb\" (UID: \"a3348575-d754-476b-94b5-28b2df5efe85\") " pod="calico-system/csi-node-driver-wm5cb" Oct 29 23:31:55.839121 kubelet[3545]: E1029 23:31:55.837901 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.839121 kubelet[3545]: W1029 23:31:55.837940 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.839121 kubelet[3545]: E1029 23:31:55.837994 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.839908 kubelet[3545]: E1029 23:31:55.839772 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.840677 kubelet[3545]: W1029 23:31:55.840136 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.841160 kubelet[3545]: E1029 23:31:55.841127 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.846175 kubelet[3545]: E1029 23:31:55.843035 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.846175 kubelet[3545]: W1029 23:31:55.844485 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.846175 kubelet[3545]: E1029 23:31:55.844706 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.849637 kubelet[3545]: E1029 23:31:55.848009 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.852552 kubelet[3545]: W1029 23:31:55.849634 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.856324 kubelet[3545]: E1029 23:31:55.856270 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.856324 kubelet[3545]: W1029 23:31:55.856310 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.857034 containerd[2015]: time="2025-10-29T23:31:55.856840916Z" level=info msg="connecting to shim d640dfcfd5aefffacd5ffbefdd63945761fb772bed038ddf3e94c84daaf57eea" address="unix:///run/containerd/s/d8155b7e83b0c07296f1df8c56a6b3e40b3c639c39f020ac77c9a704e87342df" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:55.860362 kubelet[3545]: E1029 23:31:55.860079 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.862188 kubelet[3545]: E1029 23:31:55.861006 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.862188 kubelet[3545]: W1029 23:31:55.861041 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.862961 kubelet[3545]: E1029 23:31:55.860583 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.863822 kubelet[3545]: E1029 23:31:55.863496 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.864679 kubelet[3545]: E1029 23:31:55.864242 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.865219 kubelet[3545]: W1029 23:31:55.865177 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.865844 kubelet[3545]: E1029 23:31:55.865792 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.871365 kubelet[3545]: E1029 23:31:55.867249 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.871365 kubelet[3545]: W1029 23:31:55.870557 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.871365 kubelet[3545]: E1029 23:31:55.870911 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.873687 kubelet[3545]: E1029 23:31:55.872136 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.873687 kubelet[3545]: W1029 23:31:55.872176 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.873687 kubelet[3545]: E1029 23:31:55.872550 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.875001 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.882683 kubelet[3545]: W1029 23:31:55.875154 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.875420 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.876434 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.882683 kubelet[3545]: W1029 23:31:55.876458 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.876770 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.877737 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.882683 kubelet[3545]: W1029 23:31:55.878019 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.878100 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.882683 kubelet[3545]: E1029 23:31:55.879169 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.883238 kubelet[3545]: W1029 23:31:55.879192 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.883238 kubelet[3545]: E1029 23:31:55.879256 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.883238 kubelet[3545]: E1029 23:31:55.880616 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.883238 kubelet[3545]: W1029 23:31:55.880758 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.883238 kubelet[3545]: E1029 23:31:55.880908 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.883238 kubelet[3545]: E1029 23:31:55.882228 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.883238 kubelet[3545]: W1029 23:31:55.882369 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.883238 kubelet[3545]: E1029 23:31:55.882414 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.883612 kubelet[3545]: E1029 23:31:55.883592 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.883700 kubelet[3545]: W1029 23:31:55.883617 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.885589 kubelet[3545]: E1029 23:31:55.883644 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.887832 kubelet[3545]: E1029 23:31:55.887710 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.887832 kubelet[3545]: W1029 23:31:55.887744 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.887832 kubelet[3545]: E1029 23:31:55.887777 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.919980 kubelet[3545]: E1029 23:31:55.919907 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.920522 kubelet[3545]: W1029 23:31:55.920489 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.920819 kubelet[3545]: E1029 23:31:55.920628 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.925078 kubelet[3545]: E1029 23:31:55.923953 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.925078 kubelet[3545]: W1029 23:31:55.923986 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.925078 kubelet[3545]: E1029 23:31:55.924037 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.926157 kubelet[3545]: E1029 23:31:55.925774 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.926539 kubelet[3545]: W1029 23:31:55.926333 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.927563 kubelet[3545]: E1029 23:31:55.926726 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.930686 kubelet[3545]: E1029 23:31:55.930132 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.930894 kubelet[3545]: W1029 23:31:55.930862 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.931690 kubelet[3545]: E1029 23:31:55.931145 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.932770 kubelet[3545]: E1029 23:31:55.932732 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.933354 kubelet[3545]: W1029 23:31:55.933316 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.933513 kubelet[3545]: E1029 23:31:55.933490 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.937093 kubelet[3545]: E1029 23:31:55.935400 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.937093 kubelet[3545]: W1029 23:31:55.936026 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.937093 kubelet[3545]: E1029 23:31:55.936068 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.939992 kubelet[3545]: E1029 23:31:55.939951 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.940531 kubelet[3545]: W1029 23:31:55.940147 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.940531 kubelet[3545]: E1029 23:31:55.940190 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.941211 kubelet[3545]: E1029 23:31:55.941179 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.942004 systemd[1]: Started cri-containerd-d640dfcfd5aefffacd5ffbefdd63945761fb772bed038ddf3e94c84daaf57eea.scope - libcontainer container d640dfcfd5aefffacd5ffbefdd63945761fb772bed038ddf3e94c84daaf57eea. Oct 29 23:31:55.942528 kubelet[3545]: W1029 23:31:55.942062 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.942528 kubelet[3545]: E1029 23:31:55.942119 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.946292 kubelet[3545]: E1029 23:31:55.946031 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.948344 kubelet[3545]: W1029 23:31:55.947749 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.948344 kubelet[3545]: E1029 23:31:55.947930 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.950554 kubelet[3545]: E1029 23:31:55.950515 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.952336 kubelet[3545]: W1029 23:31:55.952255 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.954495 kubelet[3545]: E1029 23:31:55.954459 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.959675 kubelet[3545]: E1029 23:31:55.959388 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.964882 kubelet[3545]: W1029 23:31:55.964818 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.965673 kubelet[3545]: E1029 23:31:55.965604 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.966091 kubelet[3545]: W1029 23:31:55.965636 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.968242 kubelet[3545]: E1029 23:31:55.968165 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.975015 kubelet[3545]: E1029 23:31:55.968473 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.975864 kubelet[3545]: E1029 23:31:55.970644 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.977437 kubelet[3545]: W1029 23:31:55.977249 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.979116 kubelet[3545]: E1029 23:31:55.978209 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.982544 kubelet[3545]: E1029 23:31:55.982507 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.983001 kubelet[3545]: W1029 23:31:55.982970 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.983771 kubelet[3545]: E1029 23:31:55.983237 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.985194 kubelet[3545]: E1029 23:31:55.985156 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.985527 kubelet[3545]: W1029 23:31:55.985501 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.986230 kubelet[3545]: E1029 23:31:55.986197 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.986584 kubelet[3545]: E1029 23:31:55.986408 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.986880 kubelet[3545]: W1029 23:31:55.986557 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.986880 kubelet[3545]: E1029 23:31:55.986831 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.988493 kubelet[3545]: E1029 23:31:55.988415 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.989190 kubelet[3545]: W1029 23:31:55.988724 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.990986 kubelet[3545]: E1029 23:31:55.990925 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.991554 kubelet[3545]: E1029 23:31:55.991096 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.991554 kubelet[3545]: W1029 23:31:55.991268 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.992095 kubelet[3545]: E1029 23:31:55.991802 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:55.992953 kubelet[3545]: E1029 23:31:55.992892 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.993225 kubelet[3545]: W1029 23:31:55.992921 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.993800 kubelet[3545]: E1029 23:31:55.993624 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.995026 kubelet[3545]: E1029 23:31:55.994797 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.995455 kubelet[3545]: W1029 23:31:55.995290 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.995944 containerd[2015]: time="2025-10-29T23:31:55.995713928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6f46,Uid:8c29a2f9-a313-4a09-a89d-cc2ececf2c09,Namespace:calico-system,Attempt:0,}" Oct 29 23:31:55.996374 kubelet[3545]: E1029 23:31:55.995919 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:55.998301 kubelet[3545]: E1029 23:31:55.998226 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:55.998918 kubelet[3545]: W1029 23:31:55.998262 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:55.998918 kubelet[3545]: E1029 23:31:55.998821 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:56.001440 kubelet[3545]: E1029 23:31:56.001389 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:56.001800 kubelet[3545]: W1029 23:31:56.001724 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:56.003965 kubelet[3545]: E1029 23:31:56.003900 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:56.006444 kubelet[3545]: E1029 23:31:56.004638 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:56.007364 kubelet[3545]: W1029 23:31:56.007231 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:56.007725 kubelet[3545]: E1029 23:31:56.007329 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:56.010992 kubelet[3545]: E1029 23:31:56.010563 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:56.010992 kubelet[3545]: W1029 23:31:56.010599 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:56.011722 kubelet[3545]: E1029 23:31:56.010697 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:56.012561 kubelet[3545]: E1029 23:31:56.012312 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:56.012561 kubelet[3545]: W1029 23:31:56.012497 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:56.018244 kubelet[3545]: E1029 23:31:56.013686 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:56.018244 kubelet[3545]: E1029 23:31:56.013559 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:56.018244 kubelet[3545]: W1029 23:31:56.013790 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:56.018244 kubelet[3545]: E1029 23:31:56.013816 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:56.031271 kubelet[3545]: E1029 23:31:56.031193 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:56.031880 kubelet[3545]: W1029 23:31:56.031846 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:56.032166 kubelet[3545]: E1029 23:31:56.032126 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:56.059851 containerd[2015]: time="2025-10-29T23:31:56.059596793Z" level=info msg="connecting to shim f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d" address="unix:///run/containerd/s/66ec31aea3a22ced3d49c87db331acb8913e8718ff97ca6498b21a93e67782b1" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:31:56.122992 systemd[1]: Started cri-containerd-f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d.scope - libcontainer container f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d. Oct 29 23:31:56.123664 containerd[2015]: time="2025-10-29T23:31:56.123569705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6f8b58c9-pnrz5,Uid:41a04211-fee2-49b9-94af-1d8cf24fb26a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d640dfcfd5aefffacd5ffbefdd63945761fb772bed038ddf3e94c84daaf57eea\"" Oct 29 23:31:56.129898 containerd[2015]: time="2025-10-29T23:31:56.129608957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 23:31:56.183599 containerd[2015]: time="2025-10-29T23:31:56.183525677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r6f46,Uid:8c29a2f9-a313-4a09-a89d-cc2ececf2c09,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\"" Oct 29 23:31:57.265890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884481105.mount: Deactivated successfully. Oct 29 23:31:57.703908 kubelet[3545]: E1029 23:31:57.703861 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:31:58.096180 containerd[2015]: time="2025-10-29T23:31:58.094966435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:58.097052 containerd[2015]: time="2025-10-29T23:31:58.097009543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Oct 29 23:31:58.098510 containerd[2015]: time="2025-10-29T23:31:58.098428207Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:58.103948 containerd[2015]: time="2025-10-29T23:31:58.103863487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:58.105429 containerd[2015]: time="2025-10-29T23:31:58.104959807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.974746122s" Oct 29 23:31:58.105429 containerd[2015]: time="2025-10-29T23:31:58.105013471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Oct 29 23:31:58.109885 
containerd[2015]: time="2025-10-29T23:31:58.108807151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 23:31:58.135517 containerd[2015]: time="2025-10-29T23:31:58.135468607Z" level=info msg="CreateContainer within sandbox \"d640dfcfd5aefffacd5ffbefdd63945761fb772bed038ddf3e94c84daaf57eea\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 23:31:58.148686 containerd[2015]: time="2025-10-29T23:31:58.146733943Z" level=info msg="Container b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:58.158271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099108129.mount: Deactivated successfully. Oct 29 23:31:58.166688 containerd[2015]: time="2025-10-29T23:31:58.166620271Z" level=info msg="CreateContainer within sandbox \"d640dfcfd5aefffacd5ffbefdd63945761fb772bed038ddf3e94c84daaf57eea\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81\"" Oct 29 23:31:58.169218 containerd[2015]: time="2025-10-29T23:31:58.168884659Z" level=info msg="StartContainer for \"b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81\"" Oct 29 23:31:58.173884 containerd[2015]: time="2025-10-29T23:31:58.173697583Z" level=info msg="connecting to shim b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81" address="unix:///run/containerd/s/d8155b7e83b0c07296f1df8c56a6b3e40b3c639c39f020ac77c9a704e87342df" protocol=ttrpc version=3 Oct 29 23:31:58.218082 systemd[1]: Started cri-containerd-b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81.scope - libcontainer container b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81. Oct 29 23:31:58.300500 containerd[2015]: time="2025-10-29T23:31:58.300414344Z" level=info msg="StartContainer for \"b8e37296ccb5b9d92c7d845a6bfc02873ceddf8885e0364ff26f830da1ebbc81\" returns successfully" Oct 29 23:31:59.015453 kubelet[3545]: I1029 23:31:59.014022 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b6f8b58c9-pnrz5" podStartSLOduration=2.035675789 podStartE2EDuration="4.013999735s" podCreationTimestamp="2025-10-29 23:31:55 +0000 UTC" firstStartedPulling="2025-10-29 23:31:56.128566037 +0000 UTC m=+36.723703635" lastFinishedPulling="2025-10-29 23:31:58.106889983 +0000 UTC m=+38.702027581" observedRunningTime="2025-10-29 23:31:59.013109947 +0000 UTC m=+39.608247569" watchObservedRunningTime="2025-10-29 23:31:59.013999735 +0000 UTC m=+39.609137345" Oct 29 23:31:59.093742 kubelet[3545]: E1029 23:31:59.093698 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.093742 kubelet[3545]: W1029 23:31:59.093736 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.093993 kubelet[3545]: E1029 23:31:59.093792 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.094220 kubelet[3545]: E1029 23:31:59.094192 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.094314 kubelet[3545]: W1029 23:31:59.094220 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.094383 kubelet[3545]: E1029 23:31:59.094319 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.094703 kubelet[3545]: E1029 23:31:59.094675 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.094887 kubelet[3545]: W1029 23:31:59.094701 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.094887 kubelet[3545]: E1029 23:31:59.094723 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.095037 kubelet[3545]: E1029 23:31:59.095007 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.095103 kubelet[3545]: W1029 23:31:59.095052 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.095103 kubelet[3545]: E1029 23:31:59.095074 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.095448 kubelet[3545]: E1029 23:31:59.095422 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.095520 kubelet[3545]: W1029 23:31:59.095447 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.095520 kubelet[3545]: E1029 23:31:59.095468 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.095801 kubelet[3545]: E1029 23:31:59.095774 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.095881 kubelet[3545]: W1029 23:31:59.095800 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.095881 kubelet[3545]: E1029 23:31:59.095827 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.096182 kubelet[3545]: E1029 23:31:59.096115 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.096182 kubelet[3545]: W1029 23:31:59.096140 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.096182 kubelet[3545]: E1029 23:31:59.096161 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.096852 kubelet[3545]: E1029 23:31:59.096821 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.097149 kubelet[3545]: W1029 23:31:59.096851 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.097149 kubelet[3545]: E1029 23:31:59.096878 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.097509 kubelet[3545]: E1029 23:31:59.097476 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.097509 kubelet[3545]: W1029 23:31:59.097504 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.097866 kubelet[3545]: E1029 23:31:59.097529 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.098116 kubelet[3545]: E1029 23:31:59.098086 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.098185 kubelet[3545]: W1029 23:31:59.098115 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.098185 kubelet[3545]: E1029 23:31:59.098142 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.098471 kubelet[3545]: E1029 23:31:59.098443 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.098727 kubelet[3545]: W1029 23:31:59.098469 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.098727 kubelet[3545]: E1029 23:31:59.098490 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.099014 kubelet[3545]: E1029 23:31:59.098985 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.099072 kubelet[3545]: W1029 23:31:59.099014 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.099072 kubelet[3545]: E1029 23:31:59.099039 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.099355 kubelet[3545]: E1029 23:31:59.099329 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.099355 kubelet[3545]: W1029 23:31:59.099354 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.099355 kubelet[3545]: E1029 23:31:59.099375 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.099948 kubelet[3545]: E1029 23:31:59.099920 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.100010 kubelet[3545]: W1029 23:31:59.099951 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.100010 kubelet[3545]: E1029 23:31:59.099975 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.100338 kubelet[3545]: E1029 23:31:59.100245 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.100452 kubelet[3545]: W1029 23:31:59.100373 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.100452 kubelet[3545]: E1029 23:31:59.100401 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.107028 kubelet[3545]: E1029 23:31:59.106903 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.107028 kubelet[3545]: W1029 23:31:59.106931 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.107028 kubelet[3545]: E1029 23:31:59.106960 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.107758 kubelet[3545]: E1029 23:31:59.107734 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.108025 kubelet[3545]: W1029 23:31:59.107892 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.108025 kubelet[3545]: E1029 23:31:59.107943 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.108619 kubelet[3545]: E1029 23:31:59.108595 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.108932 kubelet[3545]: W1029 23:31:59.108733 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.108932 kubelet[3545]: E1029 23:31:59.108779 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.109411 kubelet[3545]: E1029 23:31:59.109297 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.109411 kubelet[3545]: W1029 23:31:59.109319 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.109411 kubelet[3545]: E1029 23:31:59.109368 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.110077 kubelet[3545]: E1029 23:31:59.109996 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.110077 kubelet[3545]: W1029 23:31:59.110019 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.110330 kubelet[3545]: E1029 23:31:59.110206 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.110457 kubelet[3545]: E1029 23:31:59.110430 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.110555 kubelet[3545]: W1029 23:31:59.110460 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.110555 kubelet[3545]: E1029 23:31:59.110494 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.111165 kubelet[3545]: E1029 23:31:59.111004 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.111165 kubelet[3545]: W1029 23:31:59.111029 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.111165 kubelet[3545]: E1029 23:31:59.111062 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.111603 kubelet[3545]: E1029 23:31:59.111582 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.111765 kubelet[3545]: W1029 23:31:59.111741 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.111881 kubelet[3545]: E1029 23:31:59.111859 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.112217 kubelet[3545]: E1029 23:31:59.112190 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.112291 kubelet[3545]: W1029 23:31:59.112216 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.112291 kubelet[3545]: E1029 23:31:59.112249 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.112527 kubelet[3545]: E1029 23:31:59.112502 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.112706 kubelet[3545]: W1029 23:31:59.112527 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.112706 kubelet[3545]: E1029 23:31:59.112623 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.112862 kubelet[3545]: E1029 23:31:59.112835 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.113103 kubelet[3545]: W1029 23:31:59.112861 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.113103 kubelet[3545]: E1029 23:31:59.112911 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.113305 kubelet[3545]: E1029 23:31:59.113280 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.113387 kubelet[3545]: W1029 23:31:59.113303 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.113387 kubelet[3545]: E1029 23:31:59.113336 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.113770 kubelet[3545]: E1029 23:31:59.113753 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.113992 kubelet[3545]: W1029 23:31:59.113772 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.113992 kubelet[3545]: E1029 23:31:59.113806 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.114365 kubelet[3545]: E1029 23:31:59.114343 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.114470 kubelet[3545]: W1029 23:31:59.114449 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.114582 kubelet[3545]: E1029 23:31:59.114561 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.115186 kubelet[3545]: E1029 23:31:59.115155 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.115270 kubelet[3545]: W1029 23:31:59.115184 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.115270 kubelet[3545]: E1029 23:31:59.115223 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.116194 kubelet[3545]: E1029 23:31:59.115964 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.116194 kubelet[3545]: W1029 23:31:59.115996 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.116194 kubelet[3545]: E1029 23:31:59.116034 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 23:31:59.116524 kubelet[3545]: E1029 23:31:59.116492 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.116524 kubelet[3545]: W1029 23:31:59.116513 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.116614 kubelet[3545]: E1029 23:31:59.116537 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.117722 kubelet[3545]: E1029 23:31:59.117681 3545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 23:31:59.117722 kubelet[3545]: W1029 23:31:59.117717 3545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 23:31:59.117998 kubelet[3545]: E1029 23:31:59.117749 3545 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 23:31:59.288940 containerd[2015]: time="2025-10-29T23:31:59.288775941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:59.291826 containerd[2015]: time="2025-10-29T23:31:59.290705373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Oct 29 23:31:59.294368 containerd[2015]: time="2025-10-29T23:31:59.293177793Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:59.299493 containerd[2015]: time="2025-10-29T23:31:59.299439741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:31:59.300623 containerd[2015]: time="2025-10-29T23:31:59.300347733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.191186834s" Oct 29 23:31:59.300623 containerd[2015]: time="2025-10-29T23:31:59.300500253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Oct 29 23:31:59.313731 containerd[2015]: time="2025-10-29T23:31:59.313643097Z" level=info msg="CreateContainer within sandbox \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 23:31:59.336512 containerd[2015]: time="2025-10-29T23:31:59.335909253Z" level=info msg="Container ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4: 
CDI devices from CRI Config.CDIDevices: []" Oct 29 23:31:59.354630 containerd[2015]: time="2025-10-29T23:31:59.354553353Z" level=info msg="CreateContainer within sandbox \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\"" Oct 29 23:31:59.356640 containerd[2015]: time="2025-10-29T23:31:59.356502405Z" level=info msg="StartContainer for \"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\"" Oct 29 23:31:59.361015 containerd[2015]: time="2025-10-29T23:31:59.360963093Z" level=info msg="connecting to shim ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4" address="unix:///run/containerd/s/66ec31aea3a22ced3d49c87db331acb8913e8718ff97ca6498b21a93e67782b1" protocol=ttrpc version=3 Oct 29 23:31:59.399992 systemd[1]: Started cri-containerd-ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4.scope - libcontainer container ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4. Oct 29 23:31:59.476789 containerd[2015]: time="2025-10-29T23:31:59.476110534Z" level=info msg="StartContainer for \"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\" returns successfully" Oct 29 23:31:59.511595 systemd[1]: cri-containerd-ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4.scope: Deactivated successfully. Oct 29 23:31:59.520705 containerd[2015]: time="2025-10-29T23:31:59.520417774Z" level=info msg="received exit event container_id:\"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\" id:\"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\" pid:4194 exited_at:{seconds:1761780719 nanos:519582730}" Oct 29 23:31:59.521436 containerd[2015]: time="2025-10-29T23:31:59.521397478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\" id:\"ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4\" pid:4194 exited_at:{seconds:1761780719 nanos:519582730}" Oct 29 23:31:59.564300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae4da8ad06692c5fe99faf716fccd6939fa809f663b7b26f7f73256e691e3be4-rootfs.mount: Deactivated successfully. 
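Annotation: the repeated kubelet errors above come from FlexVolume plugin probing. /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet (the flexvol-driver init container that just ran from the pod2daemon-flexvol image is what installs it), so the driver call produces no output and unmarshalling the empty string fails with "unexpected end of JSON input". Below is a minimal Go sketch of that call pattern, assuming the conventional FlexVolume init response shape rather than quoting kubelet code.

// Hypothetical sketch of a FlexVolume "init" driver call; it illustrates why a
// missing driver binary yields both the "driver call failed" warning and the
// "unexpected end of JSON input" unmarshal error seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is conventionally expected
// to print, e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string `json:"status"`
	Message      string `json:"message,omitempty"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

func callDriver(path string, args ...string) (*driverStatus, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		// Mirrors the W "FlexVolume: driver call failed ... output: \"\"" lines.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output unmarshals to "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command %q: %w", args, err)
	}
	return &st, nil
}

func main() {
	st, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("driver init: %+v\n", st)
}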
Oct 29 23:31:59.697776 kubelet[3545]: E1029 23:31:59.697695 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:00.003551 containerd[2015]: time="2025-10-29T23:32:00.003466400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 23:32:01.696325 kubelet[3545]: E1029 23:32:01.696280 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:03.100864 containerd[2015]: time="2025-10-29T23:32:03.100811844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:03.103441 containerd[2015]: time="2025-10-29T23:32:03.103399212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Oct 29 23:32:03.104363 containerd[2015]: time="2025-10-29T23:32:03.104289348Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:03.109706 containerd[2015]: time="2025-10-29T23:32:03.109230588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:03.110958 containerd[2015]: time="2025-10-29T23:32:03.110915340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.107382064s" Oct 29 23:32:03.111120 containerd[2015]: time="2025-10-29T23:32:03.111091608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Oct 29 23:32:03.115099 containerd[2015]: time="2025-10-29T23:32:03.115052148Z" level=info msg="CreateContainer within sandbox \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 23:32:03.128820 containerd[2015]: time="2025-10-29T23:32:03.128730252Z" level=info msg="Container 2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:03.149674 containerd[2015]: time="2025-10-29T23:32:03.149428932Z" level=info msg="CreateContainer within sandbox \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\"" Oct 29 23:32:03.155257 containerd[2015]: time="2025-10-29T23:32:03.155175756Z" level=info msg="StartContainer for \"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\"" Oct 29 23:32:03.160475 
containerd[2015]: time="2025-10-29T23:32:03.160398516Z" level=info msg="connecting to shim 2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183" address="unix:///run/containerd/s/66ec31aea3a22ced3d49c87db331acb8913e8718ff97ca6498b21a93e67782b1" protocol=ttrpc version=3 Oct 29 23:32:03.206021 systemd[1]: Started cri-containerd-2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183.scope - libcontainer container 2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183. Oct 29 23:32:03.291011 containerd[2015]: time="2025-10-29T23:32:03.290940229Z" level=info msg="StartContainer for \"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\" returns successfully" Oct 29 23:32:03.697253 kubelet[3545]: E1029 23:32:03.696316 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:04.269466 containerd[2015]: time="2025-10-29T23:32:04.269393678Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 23:32:04.274575 systemd[1]: cri-containerd-2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183.scope: Deactivated successfully. Oct 29 23:32:04.275204 systemd[1]: cri-containerd-2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183.scope: Consumed 920ms CPU time, 187.4M memory peak, 165.9M written to disk. Oct 29 23:32:04.280603 containerd[2015]: time="2025-10-29T23:32:04.280512074Z" level=info msg="received exit event container_id:\"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\" id:\"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\" pid:4254 exited_at:{seconds:1761780724 nanos:280248986}" Oct 29 23:32:04.281109 containerd[2015]: time="2025-10-29T23:32:04.280999202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\" id:\"2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183\" pid:4254 exited_at:{seconds:1761780724 nanos:280248986}" Oct 29 23:32:04.322260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c493ad67356112077ee690e742326683481b39ef4fcc27e7ac154d0d37ff183-rootfs.mount: Deactivated successfully. Oct 29 23:32:04.349346 kubelet[3545]: I1029 23:32:04.349308 3545 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 23:32:04.435619 systemd[1]: Created slice kubepods-burstable-pod80643d69_11b6_49e9_90f8_d5d9f7cd74e5.slice - libcontainer container kubepods-burstable-pod80643d69_11b6_49e9_90f8_d5d9f7cd74e5.slice. 
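Annotation: the "failed to reload cni configuration" / "cni plugin not initialized" messages above occur because the install-cni container has so far only written /etc/cni/net.d/calico-kubeconfig, so containerd's fsnotify-triggered reload finds no usable network config in that directory yet. The following is a minimal sketch of such a directory check, not containerd's actual loader; file suffix and JSON fields are illustrative.

// Scans /etc/cni/net.d for a parsable *.conflist with at least one plugin,
// the rough condition behind "no network config found in /etc/cni/net.d".
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

type conflist struct {
	Name    string            `json:"name"`
	Plugins []json.RawMessage `json:"plugins"`
}

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cni config load failed:", err)
		return
	}
	found := false
	for _, e := range entries {
		if !strings.HasSuffix(e.Name(), ".conflist") {
			continue // e.g. calico-kubeconfig is written first and is not a network config
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			continue
		}
		var c conflist
		if err := json.Unmarshal(data, &c); err != nil || len(c.Plugins) == 0 {
			continue
		}
		fmt.Printf("found network %q with %d plugin(s) in %s\n", c.Name, len(c.Plugins), e.Name())
		found = true
	}
	if !found {
		fmt.Println("no network config found in", dir, ": cni plugin not initialized")
	}
}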
Oct 29 23:32:04.454625 kubelet[3545]: I1029 23:32:04.454520 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbnrq\" (UniqueName: \"kubernetes.io/projected/a76d1c5c-b32f-4f1f-b7bc-93c38286ef75-kube-api-access-bbnrq\") pod \"calico-kube-controllers-5f68d4cfbc-mbbhz\" (UID: \"a76d1c5c-b32f-4f1f-b7bc-93c38286ef75\") " pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" Oct 29 23:32:04.454625 kubelet[3545]: I1029 23:32:04.454599 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80643d69-11b6-49e9-90f8-d5d9f7cd74e5-config-volume\") pod \"coredns-668d6bf9bc-sbvzm\" (UID: \"80643d69-11b6-49e9-90f8-d5d9f7cd74e5\") " pod="kube-system/coredns-668d6bf9bc-sbvzm" Oct 29 23:32:04.458029 kubelet[3545]: I1029 23:32:04.457898 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjvdc\" (UniqueName: \"kubernetes.io/projected/76f75cd3-d56c-4449-8d61-d2a43bd411a6-kube-api-access-mjvdc\") pod \"coredns-668d6bf9bc-t9m5c\" (UID: \"76f75cd3-d56c-4449-8d61-d2a43bd411a6\") " pod="kube-system/coredns-668d6bf9bc-t9m5c" Oct 29 23:32:04.458029 kubelet[3545]: I1029 23:32:04.458019 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c5mz\" (UniqueName: \"kubernetes.io/projected/80643d69-11b6-49e9-90f8-d5d9f7cd74e5-kube-api-access-8c5mz\") pod \"coredns-668d6bf9bc-sbvzm\" (UID: \"80643d69-11b6-49e9-90f8-d5d9f7cd74e5\") " pod="kube-system/coredns-668d6bf9bc-sbvzm" Oct 29 23:32:04.458346 kubelet[3545]: I1029 23:32:04.458071 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a76d1c5c-b32f-4f1f-b7bc-93c38286ef75-tigera-ca-bundle\") pod \"calico-kube-controllers-5f68d4cfbc-mbbhz\" (UID: \"a76d1c5c-b32f-4f1f-b7bc-93c38286ef75\") " pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" Oct 29 23:32:04.458346 kubelet[3545]: I1029 23:32:04.458113 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76f75cd3-d56c-4449-8d61-d2a43bd411a6-config-volume\") pod \"coredns-668d6bf9bc-t9m5c\" (UID: \"76f75cd3-d56c-4449-8d61-d2a43bd411a6\") " pod="kube-system/coredns-668d6bf9bc-t9m5c" Oct 29 23:32:04.477751 systemd[1]: Created slice kubepods-besteffort-poda76d1c5c_b32f_4f1f_b7bc_93c38286ef75.slice - libcontainer container kubepods-besteffort-poda76d1c5c_b32f_4f1f_b7bc_93c38286ef75.slice. Oct 29 23:32:04.503777 systemd[1]: Created slice kubepods-besteffort-pod85eb2e44_caab_4abf_88b8_85cc58798d7b.slice - libcontainer container kubepods-besteffort-pod85eb2e44_caab_4abf_88b8_85cc58798d7b.slice. 
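Annotation: the "Created slice" entries here and below follow the kubelet systemd cgroup driver's naming scheme, QoS class plus the pod UID with dashes mapped to underscores. A small sketch of that mapping, reconstructed from the names visible in this log rather than from kubelet source:

// Derives the systemd slice name seen in the "Created slice" lines from a
// pod's QoS class and UID; the dash-to-underscore convention is inferred from
// this log, not quoted from the kubelet.
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// e.g. the coredns-668d6bf9bc-sbvzm pod created above:
	fmt.Println(podSliceName("burstable", "80643d69-11b6-49e9-90f8-d5d9f7cd74e5"))
	// -> kubepods-burstable-pod80643d69_11b6_49e9_90f8_d5d9f7cd74e5.slice
}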
Oct 29 23:32:04.507419 kubelet[3545]: W1029 23:32:04.507294 3545 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-30-28" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-28' and this object Oct 29 23:32:04.507756 kubelet[3545]: E1029 23:32:04.507704 3545 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-30-28\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-30-28' and this object" logger="UnhandledError" Oct 29 23:32:04.508051 kubelet[3545]: W1029 23:32:04.508004 3545 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-30-28" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-30-28' and this object Oct 29 23:32:04.509564 kubelet[3545]: E1029 23:32:04.509398 3545 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-30-28\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-30-28' and this object" logger="UnhandledError" Oct 29 23:32:04.510208 kubelet[3545]: W1029 23:32:04.508342 3545 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-28" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-30-28' and this object Oct 29 23:32:04.510208 kubelet[3545]: E1029 23:32:04.510141 3545 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-30-28\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ip-172-31-30-28' and this object" logger="UnhandledError" Oct 29 23:32:04.530763 systemd[1]: Created slice kubepods-burstable-pod76f75cd3_d56c_4449_8d61_d2a43bd411a6.slice - libcontainer container kubepods-burstable-pod76f75cd3_d56c_4449_8d61_d2a43bd411a6.slice. Oct 29 23:32:04.551516 systemd[1]: Created slice kubepods-besteffort-pod0e56094f_29e3_42d4_b70d_e871179d5468.slice - libcontainer container kubepods-besteffort-pod0e56094f_29e3_42d4_b70d_e871179d5468.slice. 
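Annotation: the reflector warnings above are the Node authorizer at work: until the pods referencing goldmane-ca-bundle, goldmane-key-pair, and kube-root-ca.crt are bound to ip-172-31-30-28, the kubelet's list/watch of those objects is Forbidden, and it retries until the relationship exists. A hedged client-go illustration of that retry-until-related pattern follows; client construction is assumed, the object names are taken from the log, and this is not kubelet code.

// Polls a ConfigMap that the Node authorizer may still deny to this node,
// retrying on Forbidden until a pod referencing it is scheduled here.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		_, err := client.CoreV1().ConfigMaps("calico-system").Get(context.TODO(), "goldmane-ca-bundle", metav1.GetOptions{})
		if apierrors.IsForbidden(err) {
			fmt.Println("not yet authorized for this node, retrying:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		if err != nil {
			panic(err)
		}
		fmt.Println("configmap visible to this node")
		return
	}
}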
Oct 29 23:32:04.558976 kubelet[3545]: I1029 23:32:04.558382 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-ca-bundle\") pod \"whisker-6d6d968f74-gnv8p\" (UID: \"85eb2e44-caab-4abf-88b8-85cc58798d7b\") " pod="calico-system/whisker-6d6d968f74-gnv8p" Oct 29 23:32:04.563972 kubelet[3545]: I1029 23:32:04.563922 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e56094f-29e3-42d4-b70d-e871179d5468-config\") pod \"goldmane-666569f655-lq8fh\" (UID: \"0e56094f-29e3-42d4-b70d-e871179d5468\") " pod="calico-system/goldmane-666569f655-lq8fh" Oct 29 23:32:04.564191 kubelet[3545]: I1029 23:32:04.564147 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8x2\" (UniqueName: \"kubernetes.io/projected/85eb2e44-caab-4abf-88b8-85cc58798d7b-kube-api-access-2x8x2\") pod \"whisker-6d6d968f74-gnv8p\" (UID: \"85eb2e44-caab-4abf-88b8-85cc58798d7b\") " pod="calico-system/whisker-6d6d968f74-gnv8p" Oct 29 23:32:04.565012 kubelet[3545]: I1029 23:32:04.564971 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8mh9\" (UniqueName: \"kubernetes.io/projected/2788a03f-6870-4386-aa19-59a40e87a133-kube-api-access-t8mh9\") pod \"calico-apiserver-7cfd4c4c89-58cj5\" (UID: \"2788a03f-6870-4386-aa19-59a40e87a133\") " pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" Oct 29 23:32:04.565433 kubelet[3545]: I1029 23:32:04.565229 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e56094f-29e3-42d4-b70d-e871179d5468-goldmane-ca-bundle\") pod \"goldmane-666569f655-lq8fh\" (UID: \"0e56094f-29e3-42d4-b70d-e871179d5468\") " pod="calico-system/goldmane-666569f655-lq8fh" Oct 29 23:32:04.565433 kubelet[3545]: I1029 23:32:04.565396 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0e56094f-29e3-42d4-b70d-e871179d5468-goldmane-key-pair\") pod \"goldmane-666569f655-lq8fh\" (UID: \"0e56094f-29e3-42d4-b70d-e871179d5468\") " pod="calico-system/goldmane-666569f655-lq8fh" Oct 29 23:32:04.568345 kubelet[3545]: I1029 23:32:04.565994 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fa52b929-eb21-441a-b4e7-cea898f2ddc5-calico-apiserver-certs\") pod \"calico-apiserver-7cfd4c4c89-kqbtj\" (UID: \"fa52b929-eb21-441a-b4e7-cea898f2ddc5\") " pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" Oct 29 23:32:04.569952 kubelet[3545]: I1029 23:32:04.569194 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2788a03f-6870-4386-aa19-59a40e87a133-calico-apiserver-certs\") pod \"calico-apiserver-7cfd4c4c89-58cj5\" (UID: \"2788a03f-6870-4386-aa19-59a40e87a133\") " pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" Oct 29 23:32:04.572175 kubelet[3545]: I1029 23:32:04.572059 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx6pn\" (UniqueName: 
\"kubernetes.io/projected/0e56094f-29e3-42d4-b70d-e871179d5468-kube-api-access-wx6pn\") pod \"goldmane-666569f655-lq8fh\" (UID: \"0e56094f-29e3-42d4-b70d-e871179d5468\") " pod="calico-system/goldmane-666569f655-lq8fh" Oct 29 23:32:04.572614 kubelet[3545]: I1029 23:32:04.572380 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-backend-key-pair\") pod \"whisker-6d6d968f74-gnv8p\" (UID: \"85eb2e44-caab-4abf-88b8-85cc58798d7b\") " pod="calico-system/whisker-6d6d968f74-gnv8p" Oct 29 23:32:04.572614 kubelet[3545]: I1029 23:32:04.572567 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgclx\" (UniqueName: \"kubernetes.io/projected/fa52b929-eb21-441a-b4e7-cea898f2ddc5-kube-api-access-mgclx\") pod \"calico-apiserver-7cfd4c4c89-kqbtj\" (UID: \"fa52b929-eb21-441a-b4e7-cea898f2ddc5\") " pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" Oct 29 23:32:04.574278 systemd[1]: Created slice kubepods-besteffort-podfa52b929_eb21_441a_b4e7_cea898f2ddc5.slice - libcontainer container kubepods-besteffort-podfa52b929_eb21_441a_b4e7_cea898f2ddc5.slice. Oct 29 23:32:04.587272 systemd[1]: Created slice kubepods-besteffort-pod2788a03f_6870_4386_aa19_59a40e87a133.slice - libcontainer container kubepods-besteffort-pod2788a03f_6870_4386_aa19_59a40e87a133.slice. Oct 29 23:32:04.762522 containerd[2015]: time="2025-10-29T23:32:04.762201112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sbvzm,Uid:80643d69-11b6-49e9-90f8-d5d9f7cd74e5,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:04.793743 containerd[2015]: time="2025-10-29T23:32:04.793566940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68d4cfbc-mbbhz,Uid:a76d1c5c-b32f-4f1f-b7bc-93c38286ef75,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:04.826889 containerd[2015]: time="2025-10-29T23:32:04.826769284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d6d968f74-gnv8p,Uid:85eb2e44-caab-4abf-88b8-85cc58798d7b,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:04.841192 containerd[2015]: time="2025-10-29T23:32:04.841070392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9m5c,Uid:76f75cd3-d56c-4449-8d61-d2a43bd411a6,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:05.061706 containerd[2015]: time="2025-10-29T23:32:05.061327117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 23:32:05.186728 containerd[2015]: time="2025-10-29T23:32:05.185969966Z" level=error msg="Failed to destroy network for sandbox \"8d5cc608d706294835caafdff89eab82a9755e1cfdde17e27360113ec51407b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.188908 containerd[2015]: time="2025-10-29T23:32:05.188561702Z" level=error msg="Failed to destroy network for sandbox \"c1fb56bb59f1ee23dbb942ca6636548a66b5a64b636012be5dbd0386f681bc50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.189300 containerd[2015]: time="2025-10-29T23:32:05.188830574Z" level=error msg="Failed to destroy network for sandbox 
\"f5537da11fbed495046198d6f924a28c1546146469123e20bcb67f07f5ef9874\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.190530 containerd[2015]: time="2025-10-29T23:32:05.190443794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68d4cfbc-mbbhz,Uid:a76d1c5c-b32f-4f1f-b7bc-93c38286ef75,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5cc608d706294835caafdff89eab82a9755e1cfdde17e27360113ec51407b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.191300 kubelet[3545]: E1029 23:32:05.190988 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5cc608d706294835caafdff89eab82a9755e1cfdde17e27360113ec51407b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.192545 kubelet[3545]: E1029 23:32:05.191838 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5cc608d706294835caafdff89eab82a9755e1cfdde17e27360113ec51407b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" Oct 29 23:32:05.192545 kubelet[3545]: E1029 23:32:05.191887 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d5cc608d706294835caafdff89eab82a9755e1cfdde17e27360113ec51407b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" Oct 29 23:32:05.193303 kubelet[3545]: E1029 23:32:05.191982 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f68d4cfbc-mbbhz_calico-system(a76d1c5c-b32f-4f1f-b7bc-93c38286ef75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f68d4cfbc-mbbhz_calico-system(a76d1c5c-b32f-4f1f-b7bc-93c38286ef75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d5cc608d706294835caafdff89eab82a9755e1cfdde17e27360113ec51407b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:32:05.194640 containerd[2015]: time="2025-10-29T23:32:05.194354702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sbvzm,Uid:80643d69-11b6-49e9-90f8-d5d9f7cd74e5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c1fb56bb59f1ee23dbb942ca6636548a66b5a64b636012be5dbd0386f681bc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.195079 kubelet[3545]: E1029 23:32:05.195014 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fb56bb59f1ee23dbb942ca6636548a66b5a64b636012be5dbd0386f681bc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.195394 kubelet[3545]: E1029 23:32:05.195216 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fb56bb59f1ee23dbb942ca6636548a66b5a64b636012be5dbd0386f681bc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sbvzm" Oct 29 23:32:05.195394 kubelet[3545]: E1029 23:32:05.195256 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1fb56bb59f1ee23dbb942ca6636548a66b5a64b636012be5dbd0386f681bc50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sbvzm" Oct 29 23:32:05.195394 kubelet[3545]: E1029 23:32:05.195330 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sbvzm_kube-system(80643d69-11b6-49e9-90f8-d5d9f7cd74e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sbvzm_kube-system(80643d69-11b6-49e9-90f8-d5d9f7cd74e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1fb56bb59f1ee23dbb942ca6636548a66b5a64b636012be5dbd0386f681bc50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sbvzm" podUID="80643d69-11b6-49e9-90f8-d5d9f7cd74e5" Oct 29 23:32:05.196839 containerd[2015]: time="2025-10-29T23:32:05.196638314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d6d968f74-gnv8p,Uid:85eb2e44-caab-4abf-88b8-85cc58798d7b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5537da11fbed495046198d6f924a28c1546146469123e20bcb67f07f5ef9874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.197714 kubelet[3545]: E1029 23:32:05.197281 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5537da11fbed495046198d6f924a28c1546146469123e20bcb67f07f5ef9874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.197714 kubelet[3545]: E1029 23:32:05.197355 3545 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5537da11fbed495046198d6f924a28c1546146469123e20bcb67f07f5ef9874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d6d968f74-gnv8p" Oct 29 23:32:05.197714 kubelet[3545]: E1029 23:32:05.197391 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5537da11fbed495046198d6f924a28c1546146469123e20bcb67f07f5ef9874\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d6d968f74-gnv8p" Oct 29 23:32:05.198003 kubelet[3545]: E1029 23:32:05.197450 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d6d968f74-gnv8p_calico-system(85eb2e44-caab-4abf-88b8-85cc58798d7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d6d968f74-gnv8p_calico-system(85eb2e44-caab-4abf-88b8-85cc58798d7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5537da11fbed495046198d6f924a28c1546146469123e20bcb67f07f5ef9874\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d6d968f74-gnv8p" podUID="85eb2e44-caab-4abf-88b8-85cc58798d7b" Oct 29 23:32:05.206934 containerd[2015]: time="2025-10-29T23:32:05.206789030Z" level=error msg="Failed to destroy network for sandbox \"6e482a31a82c24eaece332a0f1a24f963051e525ea096a3615a2cad4f85ade7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.211511 containerd[2015]: time="2025-10-29T23:32:05.211358666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9m5c,Uid:76f75cd3-d56c-4449-8d61-d2a43bd411a6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e482a31a82c24eaece332a0f1a24f963051e525ea096a3615a2cad4f85ade7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.212091 kubelet[3545]: E1029 23:32:05.212027 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e482a31a82c24eaece332a0f1a24f963051e525ea096a3615a2cad4f85ade7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.212192 kubelet[3545]: E1029 23:32:05.212125 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e482a31a82c24eaece332a0f1a24f963051e525ea096a3615a2cad4f85ade7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9m5c" Oct 29 23:32:05.212192 kubelet[3545]: E1029 23:32:05.212162 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e482a31a82c24eaece332a0f1a24f963051e525ea096a3615a2cad4f85ade7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9m5c" Oct 29 23:32:05.212317 kubelet[3545]: E1029 23:32:05.212246 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9m5c_kube-system(76f75cd3-d56c-4449-8d61-d2a43bd411a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t9m5c_kube-system(76f75cd3-d56c-4449-8d61-d2a43bd411a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e482a31a82c24eaece332a0f1a24f963051e525ea096a3615a2cad4f85ade7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9m5c" podUID="76f75cd3-d56c-4449-8d61-d2a43bd411a6" Oct 29 23:32:05.677840 kubelet[3545]: E1029 23:32:05.677774 3545 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Oct 29 23:32:05.678039 kubelet[3545]: E1029 23:32:05.677924 3545 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e56094f-29e3-42d4-b70d-e871179d5468-goldmane-key-pair podName:0e56094f-29e3-42d4-b70d-e871179d5468 nodeName:}" failed. No retries permitted until 2025-10-29 23:32:06.177875781 +0000 UTC m=+46.773013379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/0e56094f-29e3-42d4-b70d-e871179d5468-goldmane-key-pair") pod "goldmane-666569f655-lq8fh" (UID: "0e56094f-29e3-42d4-b70d-e871179d5468") : failed to sync secret cache: timed out waiting for the condition Oct 29 23:32:05.709235 systemd[1]: Created slice kubepods-besteffort-poda3348575_d754_476b_94b5_28b2df5efe85.slice - libcontainer container kubepods-besteffort-poda3348575_d754_476b_94b5_28b2df5efe85.slice. 
Oct 29 23:32:05.716163 containerd[2015]: time="2025-10-29T23:32:05.716044229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm5cb,Uid:a3348575-d754-476b-94b5-28b2df5efe85,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:05.801991 containerd[2015]: time="2025-10-29T23:32:05.801751481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-58cj5,Uid:2788a03f-6870-4386-aa19-59a40e87a133,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:05.803816 containerd[2015]: time="2025-10-29T23:32:05.803736533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-kqbtj,Uid:fa52b929-eb21-441a-b4e7-cea898f2ddc5,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:05.841749 containerd[2015]: time="2025-10-29T23:32:05.841534361Z" level=error msg="Failed to destroy network for sandbox \"f136b1bac45613f30a69b7b135de4d7da4f6b414832cee5fe94dc83c392a3423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.843878 containerd[2015]: time="2025-10-29T23:32:05.843560921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm5cb,Uid:a3348575-d754-476b-94b5-28b2df5efe85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f136b1bac45613f30a69b7b135de4d7da4f6b414832cee5fe94dc83c392a3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.844149 kubelet[3545]: E1029 23:32:05.844037 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f136b1bac45613f30a69b7b135de4d7da4f6b414832cee5fe94dc83c392a3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.844149 kubelet[3545]: E1029 23:32:05.844111 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f136b1bac45613f30a69b7b135de4d7da4f6b414832cee5fe94dc83c392a3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wm5cb" Oct 29 23:32:05.844390 kubelet[3545]: E1029 23:32:05.844152 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f136b1bac45613f30a69b7b135de4d7da4f6b414832cee5fe94dc83c392a3423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wm5cb" Oct 29 23:32:05.844390 kubelet[3545]: E1029 23:32:05.844235 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"f136b1bac45613f30a69b7b135de4d7da4f6b414832cee5fe94dc83c392a3423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:05.957350 containerd[2015]: time="2025-10-29T23:32:05.957058542Z" level=error msg="Failed to destroy network for sandbox \"beb6b3c14d499ed4e3ca690f4574ca7042d53046fd30cb8548469cff8baadea7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.960155 containerd[2015]: time="2025-10-29T23:32:05.959992434Z" level=error msg="Failed to destroy network for sandbox \"846a986d56498edba24bb132e467d3e17fff1ac9561aac02ccfc44f077d9d099\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.960155 containerd[2015]: time="2025-10-29T23:32:05.960062502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-58cj5,Uid:2788a03f-6870-4386-aa19-59a40e87a133,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb6b3c14d499ed4e3ca690f4574ca7042d53046fd30cb8548469cff8baadea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.960820 kubelet[3545]: E1029 23:32:05.960397 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb6b3c14d499ed4e3ca690f4574ca7042d53046fd30cb8548469cff8baadea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.960820 kubelet[3545]: E1029 23:32:05.960469 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb6b3c14d499ed4e3ca690f4574ca7042d53046fd30cb8548469cff8baadea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" Oct 29 23:32:05.960820 kubelet[3545]: E1029 23:32:05.960510 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beb6b3c14d499ed4e3ca690f4574ca7042d53046fd30cb8548469cff8baadea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" Oct 29 23:32:05.961003 kubelet[3545]: E1029 23:32:05.960588 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cfd4c4c89-58cj5_calico-apiserver(2788a03f-6870-4386-aa19-59a40e87a133)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7cfd4c4c89-58cj5_calico-apiserver(2788a03f-6870-4386-aa19-59a40e87a133)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beb6b3c14d499ed4e3ca690f4574ca7042d53046fd30cb8548469cff8baadea7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:32:05.962466 containerd[2015]: time="2025-10-29T23:32:05.962012730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-kqbtj,Uid:fa52b929-eb21-441a-b4e7-cea898f2ddc5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"846a986d56498edba24bb132e467d3e17fff1ac9561aac02ccfc44f077d9d099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.963017 kubelet[3545]: E1029 23:32:05.962605 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846a986d56498edba24bb132e467d3e17fff1ac9561aac02ccfc44f077d9d099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:05.963294 kubelet[3545]: E1029 23:32:05.963113 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846a986d56498edba24bb132e467d3e17fff1ac9561aac02ccfc44f077d9d099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" Oct 29 23:32:05.963294 kubelet[3545]: E1029 23:32:05.963163 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846a986d56498edba24bb132e467d3e17fff1ac9561aac02ccfc44f077d9d099\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" Oct 29 23:32:05.963817 kubelet[3545]: E1029 23:32:05.963635 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cfd4c4c89-kqbtj_calico-apiserver(fa52b929-eb21-441a-b4e7-cea898f2ddc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cfd4c4c89-kqbtj_calico-apiserver(fa52b929-eb21-441a-b4e7-cea898f2ddc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"846a986d56498edba24bb132e467d3e17fff1ac9561aac02ccfc44f077d9d099\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:32:06.329104 systemd[1]: run-netns-cni\x2d66247fd0\x2d70d6\x2d4b27\x2d7701\x2d58341c48c3d4.mount: Deactivated successfully. 
Oct 29 23:32:06.329413 systemd[1]: run-netns-cni\x2d729a9faf\x2d29f8\x2d6443\x2d29cc\x2dc782d0762fad.mount: Deactivated successfully. Oct 29 23:32:06.366444 containerd[2015]: time="2025-10-29T23:32:06.366180112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lq8fh,Uid:0e56094f-29e3-42d4-b70d-e871179d5468,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:06.525858 containerd[2015]: time="2025-10-29T23:32:06.525282089Z" level=error msg="Failed to destroy network for sandbox \"7b2aeea57743154186338f4a37599c3574d529cd0051eba516619f1b0b114ea4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:06.530797 containerd[2015]: time="2025-10-29T23:32:06.530362625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lq8fh,Uid:0e56094f-29e3-42d4-b70d-e871179d5468,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2aeea57743154186338f4a37599c3574d529cd0051eba516619f1b0b114ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:06.530974 kubelet[3545]: E1029 23:32:06.530845 3545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2aeea57743154186338f4a37599c3574d529cd0051eba516619f1b0b114ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 23:32:06.531435 kubelet[3545]: E1029 23:32:06.530943 3545 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2aeea57743154186338f4a37599c3574d529cd0051eba516619f1b0b114ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lq8fh" Oct 29 23:32:06.531435 kubelet[3545]: E1029 23:32:06.531336 3545 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2aeea57743154186338f4a37599c3574d529cd0051eba516619f1b0b114ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lq8fh" Oct 29 23:32:06.532824 kubelet[3545]: E1029 23:32:06.531448 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lq8fh_calico-system(0e56094f-29e3-42d4-b70d-e871179d5468)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lq8fh_calico-system(0e56094f-29e3-42d4-b70d-e871179d5468)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b2aeea57743154186338f4a37599c3574d529cd0051eba516619f1b0b114ea4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lq8fh" 
podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:32:06.536584 systemd[1]: run-netns-cni\x2d2d60c2f6\x2dc8a9\x2de6e4\x2d45a6\x2d9007e01e9b13.mount: Deactivated successfully. Oct 29 23:32:11.137337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2151654669.mount: Deactivated successfully. Oct 29 23:32:11.186231 containerd[2015]: time="2025-10-29T23:32:11.186158720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:11.187598 containerd[2015]: time="2025-10-29T23:32:11.187274228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Oct 29 23:32:11.188514 containerd[2015]: time="2025-10-29T23:32:11.188459480Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:11.191819 containerd[2015]: time="2025-10-29T23:32:11.191769128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 29 23:32:11.193792 containerd[2015]: time="2025-10-29T23:32:11.193044524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.130704642s" Oct 29 23:32:11.193792 containerd[2015]: time="2025-10-29T23:32:11.193099340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Oct 29 23:32:11.220494 containerd[2015]: time="2025-10-29T23:32:11.220436084Z" level=info msg="CreateContainer within sandbox \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 23:32:11.243370 containerd[2015]: time="2025-10-29T23:32:11.243320012Z" level=info msg="Container 148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:11.265891 containerd[2015]: time="2025-10-29T23:32:11.265788836Z" level=info msg="CreateContainer within sandbox \"f2d1bb89f20d0dc0093ba5a7c1e899adeefb2bb2c4210b7678450fa85dfc546d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\"" Oct 29 23:32:11.267512 containerd[2015]: time="2025-10-29T23:32:11.267431300Z" level=info msg="StartContainer for \"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\"" Oct 29 23:32:11.271064 containerd[2015]: time="2025-10-29T23:32:11.270931664Z" level=info msg="connecting to shim 148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a" address="unix:///run/containerd/s/66ec31aea3a22ced3d49c87db331acb8913e8718ff97ca6498b21a93e67782b1" protocol=ttrpc version=3 Oct 29 23:32:11.311028 systemd[1]: Started cri-containerd-148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a.scope - libcontainer container 148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a. 
Oct 29 23:32:11.412421 containerd[2015]: time="2025-10-29T23:32:11.412000593Z" level=info msg="StartContainer for \"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" returns successfully" Oct 29 23:32:11.672518 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 23:32:11.672834 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 29 23:32:12.040686 kubelet[3545]: I1029 23:32:12.038942 3545 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-ca-bundle\") pod \"85eb2e44-caab-4abf-88b8-85cc58798d7b\" (UID: \"85eb2e44-caab-4abf-88b8-85cc58798d7b\") " Oct 29 23:32:12.040686 kubelet[3545]: I1029 23:32:12.039017 3545 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x8x2\" (UniqueName: \"kubernetes.io/projected/85eb2e44-caab-4abf-88b8-85cc58798d7b-kube-api-access-2x8x2\") pod \"85eb2e44-caab-4abf-88b8-85cc58798d7b\" (UID: \"85eb2e44-caab-4abf-88b8-85cc58798d7b\") " Oct 29 23:32:12.040686 kubelet[3545]: I1029 23:32:12.039085 3545 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-backend-key-pair\") pod \"85eb2e44-caab-4abf-88b8-85cc58798d7b\" (UID: \"85eb2e44-caab-4abf-88b8-85cc58798d7b\") " Oct 29 23:32:12.042167 kubelet[3545]: I1029 23:32:12.042110 3545 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "85eb2e44-caab-4abf-88b8-85cc58798d7b" (UID: "85eb2e44-caab-4abf-88b8-85cc58798d7b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 23:32:12.055948 kubelet[3545]: I1029 23:32:12.055844 3545 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "85eb2e44-caab-4abf-88b8-85cc58798d7b" (UID: "85eb2e44-caab-4abf-88b8-85cc58798d7b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 23:32:12.056371 kubelet[3545]: I1029 23:32:12.056279 3545 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85eb2e44-caab-4abf-88b8-85cc58798d7b-kube-api-access-2x8x2" (OuterVolumeSpecName: "kube-api-access-2x8x2") pod "85eb2e44-caab-4abf-88b8-85cc58798d7b" (UID: "85eb2e44-caab-4abf-88b8-85cc58798d7b"). InnerVolumeSpecName "kube-api-access-2x8x2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 23:32:12.139764 kubelet[3545]: I1029 23:32:12.139532 3545 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-backend-key-pair\") on node \"ip-172-31-30-28\" DevicePath \"\"" Oct 29 23:32:12.139764 kubelet[3545]: I1029 23:32:12.139593 3545 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85eb2e44-caab-4abf-88b8-85cc58798d7b-whisker-ca-bundle\") on node \"ip-172-31-30-28\" DevicePath \"\"" Oct 29 23:32:12.139764 kubelet[3545]: I1029 23:32:12.139619 3545 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2x8x2\" (UniqueName: \"kubernetes.io/projected/85eb2e44-caab-4abf-88b8-85cc58798d7b-kube-api-access-2x8x2\") on node \"ip-172-31-30-28\" DevicePath \"\"" Oct 29 23:32:12.141390 systemd[1]: var-lib-kubelet-pods-85eb2e44\x2dcaab\x2d4abf\x2d88b8\x2d85cc58798d7b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2x8x2.mount: Deactivated successfully. Oct 29 23:32:12.143132 systemd[1]: var-lib-kubelet-pods-85eb2e44\x2dcaab\x2d4abf\x2d88b8\x2d85cc58798d7b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 23:32:12.165311 systemd[1]: Removed slice kubepods-besteffort-pod85eb2e44_caab_4abf_88b8_85cc58798d7b.slice - libcontainer container kubepods-besteffort-pod85eb2e44_caab_4abf_88b8_85cc58798d7b.slice. Oct 29 23:32:12.233525 kubelet[3545]: I1029 23:32:12.233427 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r6f46" podStartSLOduration=2.22477605 podStartE2EDuration="17.233394933s" podCreationTimestamp="2025-10-29 23:31:55 +0000 UTC" firstStartedPulling="2025-10-29 23:31:56.186222941 +0000 UTC m=+36.781360539" lastFinishedPulling="2025-10-29 23:32:11.194841824 +0000 UTC m=+51.789979422" observedRunningTime="2025-10-29 23:32:12.202201545 +0000 UTC m=+52.797339167" watchObservedRunningTime="2025-10-29 23:32:12.233394933 +0000 UTC m=+52.828532531" Oct 29 23:32:12.369122 systemd[1]: Created slice kubepods-besteffort-pode0ab4b8c_c86a_446f_bf29_1179e47cdecc.slice - libcontainer container kubepods-besteffort-pode0ab4b8c_c86a_446f_bf29_1179e47cdecc.slice. 
Oct 29 23:32:12.443689 kubelet[3545]: I1029 23:32:12.443606 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ab4b8c-c86a-446f-bf29-1179e47cdecc-whisker-ca-bundle\") pod \"whisker-6c7fdf84f6-glqpf\" (UID: \"e0ab4b8c-c86a-446f-bf29-1179e47cdecc\") " pod="calico-system/whisker-6c7fdf84f6-glqpf" Oct 29 23:32:12.444114 kubelet[3545]: I1029 23:32:12.444074 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0ab4b8c-c86a-446f-bf29-1179e47cdecc-whisker-backend-key-pair\") pod \"whisker-6c7fdf84f6-glqpf\" (UID: \"e0ab4b8c-c86a-446f-bf29-1179e47cdecc\") " pod="calico-system/whisker-6c7fdf84f6-glqpf" Oct 29 23:32:12.444262 kubelet[3545]: I1029 23:32:12.444238 3545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqwg\" (UniqueName: \"kubernetes.io/projected/e0ab4b8c-c86a-446f-bf29-1179e47cdecc-kube-api-access-bgqwg\") pod \"whisker-6c7fdf84f6-glqpf\" (UID: \"e0ab4b8c-c86a-446f-bf29-1179e47cdecc\") " pod="calico-system/whisker-6c7fdf84f6-glqpf" Oct 29 23:32:12.519414 containerd[2015]: time="2025-10-29T23:32:12.519317771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" id:\"b7ce2fb7a1d70068f6a423a1a20daa26c8296a30359ead26247ebf0bfa4f7256\" pid:4574 exit_status:1 exited_at:{seconds:1761780732 nanos:518474495}" Oct 29 23:32:12.681284 containerd[2015]: time="2025-10-29T23:32:12.681171011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7fdf84f6-glqpf,Uid:e0ab4b8c-c86a-446f-bf29-1179e47cdecc,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:13.102236 (udev-worker)[4548]: Network interface NamePolicy= disabled on kernel command line. 
Oct 29 23:32:13.104270 systemd-networkd[1811]: cali6d73300dc68: Link UP Oct 29 23:32:13.107865 systemd-networkd[1811]: cali6d73300dc68: Gained carrier Oct 29 23:32:13.144961 containerd[2015]: 2025-10-29 23:32:12.746 [INFO][4589] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 23:32:13.144961 containerd[2015]: 2025-10-29 23:32:12.833 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0 whisker-6c7fdf84f6- calico-system e0ab4b8c-c86a-446f-bf29-1179e47cdecc 922 0 2025-10-29 23:32:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c7fdf84f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-28 whisker-6c7fdf84f6-glqpf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6d73300dc68 [] [] }} ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-" Oct 29 23:32:13.144961 containerd[2015]: 2025-10-29 23:32:12.834 [INFO][4589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.144961 containerd[2015]: 2025-10-29 23:32:12.941 [INFO][4607] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" HandleID="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Workload="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:12.942 [INFO][4607] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" HandleID="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Workload="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c200), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-28", "pod":"whisker-6c7fdf84f6-glqpf", "timestamp":"2025-10-29 23:32:12.941906089 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:12.942 [INFO][4607] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:12.942 [INFO][4607] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:12.943 [INFO][4607] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:12.996 [INFO][4607] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" host="ip-172-31-30-28" Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:13.017 [INFO][4607] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:13.035 [INFO][4607] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:13.045 [INFO][4607] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:13.145292 containerd[2015]: 2025-10-29 23:32:13.050 [INFO][4607] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.051 [INFO][4607] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" host="ip-172-31-30-28" Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.055 [INFO][4607] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023 Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.064 [INFO][4607] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" host="ip-172-31-30-28" Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.079 [INFO][4607] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.193/26] block=192.168.33.192/26 handle="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" host="ip-172-31-30-28" Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.079 [INFO][4607] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.193/26] handle="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" host="ip-172-31-30-28" Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.079 [INFO][4607] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:13.146270 containerd[2015]: 2025-10-29 23:32:13.079 [INFO][4607] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.193/26] IPv6=[] ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" HandleID="k8s-pod-network.18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Workload="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.146583 containerd[2015]: 2025-10-29 23:32:13.086 [INFO][4589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0", GenerateName:"whisker-6c7fdf84f6-", Namespace:"calico-system", SelfLink:"", UID:"e0ab4b8c-c86a-446f-bf29-1179e47cdecc", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7fdf84f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"whisker-6c7fdf84f6-glqpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6d73300dc68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:13.146583 containerd[2015]: 2025-10-29 23:32:13.086 [INFO][4589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.193/32] ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.146952 containerd[2015]: 2025-10-29 23:32:13.087 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d73300dc68 ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.146952 containerd[2015]: 2025-10-29 23:32:13.110 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.147065 containerd[2015]: 2025-10-29 23:32:13.111 [INFO][4589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" 
WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0", GenerateName:"whisker-6c7fdf84f6-", Namespace:"calico-system", SelfLink:"", UID:"e0ab4b8c-c86a-446f-bf29-1179e47cdecc", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 32, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c7fdf84f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023", Pod:"whisker-6c7fdf84f6-glqpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.33.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6d73300dc68", MAC:"d2:44:7b:b2:6f:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:13.147190 containerd[2015]: 2025-10-29 23:32:13.136 [INFO][4589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" Namespace="calico-system" Pod="whisker-6c7fdf84f6-glqpf" WorkloadEndpoint="ip--172--31--30--28-k8s-whisker--6c7fdf84f6--glqpf-eth0" Oct 29 23:32:13.228238 containerd[2015]: time="2025-10-29T23:32:13.227480338Z" level=info msg="connecting to shim 18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023" address="unix:///run/containerd/s/616790df86532f27c20add50937d67aff49aeb5587419fb23d47f6235337368e" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:13.283009 systemd[1]: Started cri-containerd-18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023.scope - libcontainer container 18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023. Oct 29 23:32:13.305635 systemd[1]: Started sshd@9-172.31.30.28:22-139.178.89.65:56312.service - OpenSSH per-connection server daemon (139.178.89.65:56312). Oct 29 23:32:13.471261 containerd[2015]: time="2025-10-29T23:32:13.470881175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7fdf84f6-glqpf,Uid:e0ab4b8c-c86a-446f-bf29-1179e47cdecc,Namespace:calico-system,Attempt:0,} returns sandbox id \"18ea39d26b70040a56ef792689245c6f3eb905cea54a449700d655de34e60023\"" Oct 29 23:32:13.477274 containerd[2015]: time="2025-10-29T23:32:13.477195587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:32:13.543463 sshd[4680]: Accepted publickey for core from 139.178.89.65 port 56312 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:13.547982 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:13.560205 systemd-logind[1985]: New session 10 of user core. Oct 29 23:32:13.570798 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 29 23:32:13.615918 containerd[2015]: time="2025-10-29T23:32:13.615870144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" id:\"e8a6fe3a2c354589eb38dd055da5453d928ac37fff951a2d05153bee2e588703\" pid:4637 exit_status:1 exited_at:{seconds:1761780733 nanos:615495660}" Oct 29 23:32:13.714509 kubelet[3545]: I1029 23:32:13.714414 3545 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85eb2e44-caab-4abf-88b8-85cc58798d7b" path="/var/lib/kubelet/pods/85eb2e44-caab-4abf-88b8-85cc58798d7b/volumes" Oct 29 23:32:13.764543 containerd[2015]: time="2025-10-29T23:32:13.763957129Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:13.766159 containerd[2015]: time="2025-10-29T23:32:13.766089541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:32:13.766868 containerd[2015]: time="2025-10-29T23:32:13.766313233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:32:13.767463 kubelet[3545]: E1029 23:32:13.767147 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:13.767463 kubelet[3545]: E1029 23:32:13.767211 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:13.773170 kubelet[3545]: E1029 23:32:13.773062 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:752c5f95daab45aab789fd633a80c4d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:13.777683 containerd[2015]: time="2025-10-29T23:32:13.777618037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:32:13.896146 sshd[4697]: Connection closed by 139.178.89.65 port 56312 Oct 29 23:32:13.899717 sshd-session[4680]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:13.913263 systemd[1]: sshd@9-172.31.30.28:22-139.178.89.65:56312.service: Deactivated successfully. Oct 29 23:32:13.921270 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 23:32:13.923431 systemd-logind[1985]: Session 10 logged out. Waiting for processes to exit. Oct 29 23:32:13.928378 systemd-logind[1985]: Removed session 10. 
Oct 29 23:32:14.095452 containerd[2015]: time="2025-10-29T23:32:14.095228062Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:14.097199 containerd[2015]: time="2025-10-29T23:32:14.097021306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:32:14.097199 containerd[2015]: time="2025-10-29T23:32:14.097048750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:14.099411 kubelet[3545]: E1029 23:32:14.099289 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:14.099411 kubelet[3545]: E1029 23:32:14.099357 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:14.099917 kubelet[3545]: E1029 23:32:14.099782 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:14.103681 kubelet[3545]: E1029 23:32:14.101517 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:32:14.115703 kubelet[3545]: E1029 23:32:14.115579 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:32:14.339507 systemd-networkd[1811]: cali6d73300dc68: Gained IPv6LL Oct 29 23:32:15.121037 kubelet[3545]: E1029 23:32:15.120906 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:32:15.284000 systemd-networkd[1811]: vxlan.calico: Link UP Oct 29 23:32:15.284020 
systemd-networkd[1811]: vxlan.calico: Gained carrier Oct 29 23:32:15.418092 (udev-worker)[4547]: Network interface NamePolicy= disabled on kernel command line. Oct 29 23:32:16.697943 containerd[2015]: time="2025-10-29T23:32:16.697867755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sbvzm,Uid:80643d69-11b6-49e9-90f8-d5d9f7cd74e5,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:16.699908 containerd[2015]: time="2025-10-29T23:32:16.697894995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9m5c,Uid:76f75cd3-d56c-4449-8d61-d2a43bd411a6,Namespace:kube-system,Attempt:0,}" Oct 29 23:32:17.018882 systemd-networkd[1811]: cali2fc2753d94d: Link UP Oct 29 23:32:17.022513 systemd-networkd[1811]: cali2fc2753d94d: Gained carrier Oct 29 23:32:17.058414 containerd[2015]: 2025-10-29 23:32:16.841 [INFO][4912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0 coredns-668d6bf9bc- kube-system 76f75cd3-d56c-4449-8d61-d2a43bd411a6 862 0 2025-10-29 23:31:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-28 coredns-668d6bf9bc-t9m5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2fc2753d94d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-" Oct 29 23:32:17.058414 containerd[2015]: 2025-10-29 23:32:16.842 [INFO][4912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.058414 containerd[2015]: 2025-10-29 23:32:16.917 [INFO][4929] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" HandleID="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Workload="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.917 [INFO][4929] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" HandleID="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Workload="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb2d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-28", "pod":"coredns-668d6bf9bc-t9m5c", "timestamp":"2025-10-29 23:32:16.917508676 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.920 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.920 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.921 [INFO][4929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.938 [INFO][4929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" host="ip-172-31-30-28" Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.956 [INFO][4929] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.968 [INFO][4929] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.971 [INFO][4929] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.975 [INFO][4929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:17.059280 containerd[2015]: 2025-10-29 23:32:16.975 [INFO][4929] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" host="ip-172-31-30-28" Oct 29 23:32:17.061702 containerd[2015]: 2025-10-29 23:32:16.978 [INFO][4929] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5 Oct 29 23:32:17.061702 containerd[2015]: 2025-10-29 23:32:16.985 [INFO][4929] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" host="ip-172-31-30-28" Oct 29 23:32:17.061702 containerd[2015]: 2025-10-29 23:32:16.998 [INFO][4929] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.194/26] block=192.168.33.192/26 handle="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" host="ip-172-31-30-28" Oct 29 23:32:17.061702 containerd[2015]: 2025-10-29 23:32:16.999 [INFO][4929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.194/26] handle="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" host="ip-172-31-30-28" Oct 29 23:32:17.061702 containerd[2015]: 2025-10-29 23:32:17.001 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:17.061702 containerd[2015]: 2025-10-29 23:32:17.002 [INFO][4929] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.194/26] IPv6=[] ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" HandleID="k8s-pod-network.341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Workload="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.063060 containerd[2015]: 2025-10-29 23:32:17.007 [INFO][4912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"76f75cd3-d56c-4449-8d61-d2a43bd411a6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"coredns-668d6bf9bc-t9m5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fc2753d94d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:17.063060 containerd[2015]: 2025-10-29 23:32:17.008 [INFO][4912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.194/32] ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.063060 containerd[2015]: 2025-10-29 23:32:17.008 [INFO][4912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fc2753d94d ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.063060 containerd[2015]: 2025-10-29 23:32:17.023 [INFO][4912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" 
WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.063060 containerd[2015]: 2025-10-29 23:32:17.026 [INFO][4912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"76f75cd3-d56c-4449-8d61-d2a43bd411a6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5", Pod:"coredns-668d6bf9bc-t9m5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2fc2753d94d", MAC:"06:02:14:7f:16:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:17.063060 containerd[2015]: 2025-10-29 23:32:17.053 [INFO][4912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9m5c" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--t9m5c-eth0" Oct 29 23:32:17.090336 systemd-networkd[1811]: vxlan.calico: Gained IPv6LL Oct 29 23:32:17.130989 containerd[2015]: time="2025-10-29T23:32:17.130917757Z" level=info msg="connecting to shim 341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5" address="unix:///run/containerd/s/2f1515011b836beccd49c6a9927154ecfc3e871a7d3eb82b435724b34784c02a" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:17.147231 systemd-networkd[1811]: cali81ca7fa42cc: Link UP Oct 29 23:32:17.149181 systemd-networkd[1811]: cali81ca7fa42cc: Gained carrier Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:16.854 [INFO][4904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0 coredns-668d6bf9bc- kube-system 80643d69-11b6-49e9-90f8-d5d9f7cd74e5 860 0 2025-10-29 23:31:23 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-28 coredns-668d6bf9bc-sbvzm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81ca7fa42cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:16.855 [INFO][4904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:16.956 [INFO][4935] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" HandleID="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Workload="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:16.957 [INFO][4935] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" HandleID="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Workload="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9b10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-28", "pod":"coredns-668d6bf9bc-sbvzm", "timestamp":"2025-10-29 23:32:16.956100665 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:16.957 [INFO][4935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.001 [INFO][4935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.001 [INFO][4935] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.040 [INFO][4935] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.063 [INFO][4935] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.080 [INFO][4935] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.084 [INFO][4935] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.093 [INFO][4935] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.093 [INFO][4935] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.103 [INFO][4935] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.116 [INFO][4935] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.133 [INFO][4935] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.195/26] block=192.168.33.192/26 handle="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.133 [INFO][4935] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.195/26] handle="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" host="ip-172-31-30-28" Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.133 [INFO][4935] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:17.209464 containerd[2015]: 2025-10-29 23:32:17.133 [INFO][4935] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.195/26] IPv6=[] ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" HandleID="k8s-pod-network.0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Workload="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.212983 containerd[2015]: 2025-10-29 23:32:17.139 [INFO][4904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"80643d69-11b6-49e9-90f8-d5d9f7cd74e5", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"coredns-668d6bf9bc-sbvzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81ca7fa42cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:17.212983 containerd[2015]: 2025-10-29 23:32:17.140 [INFO][4904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.195/32] ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.212983 containerd[2015]: 2025-10-29 23:32:17.140 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81ca7fa42cc ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.212983 containerd[2015]: 2025-10-29 23:32:17.149 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" 
WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.212983 containerd[2015]: 2025-10-29 23:32:17.151 [INFO][4904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"80643d69-11b6-49e9-90f8-d5d9f7cd74e5", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec", Pod:"coredns-668d6bf9bc-sbvzm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.33.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81ca7fa42cc", MAC:"52:e2:48:66:38:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:17.212983 containerd[2015]: 2025-10-29 23:32:17.194 [INFO][4904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" Namespace="kube-system" Pod="coredns-668d6bf9bc-sbvzm" WorkloadEndpoint="ip--172--31--30--28-k8s-coredns--668d6bf9bc--sbvzm-eth0" Oct 29 23:32:17.231177 systemd[1]: Started cri-containerd-341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5.scope - libcontainer container 341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5. Oct 29 23:32:17.296821 containerd[2015]: time="2025-10-29T23:32:17.296079770Z" level=info msg="connecting to shim 0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec" address="unix:///run/containerd/s/93a195dd6f9e6292fcfca02a60d6876c28a179603273aab343fc1aa3a107c60c" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:17.366610 systemd[1]: Started cri-containerd-0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec.scope - libcontainer container 0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec. 
Oct 29 23:32:17.390851 containerd[2015]: time="2025-10-29T23:32:17.390792927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9m5c,Uid:76f75cd3-d56c-4449-8d61-d2a43bd411a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5\"" Oct 29 23:32:17.400263 containerd[2015]: time="2025-10-29T23:32:17.398903331Z" level=info msg="CreateContainer within sandbox \"341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 23:32:17.420988 containerd[2015]: time="2025-10-29T23:32:17.420800079Z" level=info msg="Container ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:17.438417 containerd[2015]: time="2025-10-29T23:32:17.438330819Z" level=info msg="CreateContainer within sandbox \"341af1ec5dd0cd33424c3e7de563e71fd650f5666662c1b7b1ee8aafbee251c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278\"" Oct 29 23:32:17.441458 containerd[2015]: time="2025-10-29T23:32:17.441373107Z" level=info msg="StartContainer for \"ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278\"" Oct 29 23:32:17.447531 containerd[2015]: time="2025-10-29T23:32:17.447243207Z" level=info msg="connecting to shim ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278" address="unix:///run/containerd/s/2f1515011b836beccd49c6a9927154ecfc3e871a7d3eb82b435724b34784c02a" protocol=ttrpc version=3 Oct 29 23:32:17.499258 containerd[2015]: time="2025-10-29T23:32:17.499201275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sbvzm,Uid:80643d69-11b6-49e9-90f8-d5d9f7cd74e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec\"" Oct 29 23:32:17.513358 containerd[2015]: time="2025-10-29T23:32:17.513309699Z" level=info msg="CreateContainer within sandbox \"0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 23:32:17.515075 systemd[1]: Started cri-containerd-ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278.scope - libcontainer container ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278. 
Oct 29 23:32:17.534530 containerd[2015]: time="2025-10-29T23:32:17.534383103Z" level=info msg="Container ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:32:17.547472 containerd[2015]: time="2025-10-29T23:32:17.547231300Z" level=info msg="CreateContainer within sandbox \"0c1d5c859c4be36de823157a4f9d7d5cd07cf1f4da1b97568d455d23b46267ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763\"" Oct 29 23:32:17.550502 containerd[2015]: time="2025-10-29T23:32:17.550436164Z" level=info msg="StartContainer for \"ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763\"" Oct 29 23:32:17.554120 containerd[2015]: time="2025-10-29T23:32:17.553994788Z" level=info msg="connecting to shim ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763" address="unix:///run/containerd/s/93a195dd6f9e6292fcfca02a60d6876c28a179603273aab343fc1aa3a107c60c" protocol=ttrpc version=3 Oct 29 23:32:17.605270 systemd[1]: Started cri-containerd-ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763.scope - libcontainer container ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763. Oct 29 23:32:17.614470 containerd[2015]: time="2025-10-29T23:32:17.614196532Z" level=info msg="StartContainer for \"ec6aae83357cd39fd0348db366fcef154b836a521c49bacf2f4d5d96df884278\" returns successfully" Oct 29 23:32:17.686138 containerd[2015]: time="2025-10-29T23:32:17.686075368Z" level=info msg="StartContainer for \"ddc477c55007cbe25ed42f700996ed1f70f6139b760a17be80147a19818dc763\" returns successfully" Oct 29 23:32:17.700630 containerd[2015]: time="2025-10-29T23:32:17.700525288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-kqbtj,Uid:fa52b929-eb21-441a-b4e7-cea898f2ddc5,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:18.025472 systemd-networkd[1811]: calie49fa9308c7: Link UP Oct 29 23:32:18.027453 systemd-networkd[1811]: calie49fa9308c7: Gained carrier Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.860 [INFO][5116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0 calico-apiserver-7cfd4c4c89- calico-apiserver fa52b929-eb21-441a-b4e7-cea898f2ddc5 865 0 2025-10-29 23:31:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfd4c4c89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-28 calico-apiserver-7cfd4c4c89-kqbtj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie49fa9308c7 [] [] }} ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.862 [INFO][5116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.929 [INFO][5132] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" HandleID="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Workload="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.929 [INFO][5132] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" HandleID="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Workload="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c98d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-28", "pod":"calico-apiserver-7cfd4c4c89-kqbtj", "timestamp":"2025-10-29 23:32:17.929333249 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.929 [INFO][5132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.929 [INFO][5132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.929 [INFO][5132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.960 [INFO][5132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.967 [INFO][5132] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.975 [INFO][5132] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.979 [INFO][5132] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.983 [INFO][5132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.984 [INFO][5132] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.986 [INFO][5132] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:17.993 [INFO][5132] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:18.012 [INFO][5132] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.196/26] block=192.168.33.192/26 handle="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" host="ip-172-31-30-28" 
Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:18.013 [INFO][5132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.196/26] handle="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" host="ip-172-31-30-28" Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:18.013 [INFO][5132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 23:32:18.065111 containerd[2015]: 2025-10-29 23:32:18.014 [INFO][5132] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.196/26] IPv6=[] ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" HandleID="k8s-pod-network.7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Workload="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.067562 containerd[2015]: 2025-10-29 23:32:18.019 [INFO][5116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0", GenerateName:"calico-apiserver-7cfd4c4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa52b929-eb21-441a-b4e7-cea898f2ddc5", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd4c4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"calico-apiserver-7cfd4c4c89-kqbtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie49fa9308c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:18.067562 containerd[2015]: 2025-10-29 23:32:18.020 [INFO][5116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.196/32] ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.067562 containerd[2015]: 2025-10-29 23:32:18.020 [INFO][5116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie49fa9308c7 ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.067562 containerd[2015]: 2025-10-29 23:32:18.028 [INFO][5116] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.067562 containerd[2015]: 2025-10-29 23:32:18.032 [INFO][5116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0", GenerateName:"calico-apiserver-7cfd4c4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"fa52b929-eb21-441a-b4e7-cea898f2ddc5", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd4c4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa", Pod:"calico-apiserver-7cfd4c4c89-kqbtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie49fa9308c7", MAC:"b6:9a:9f:45:e6:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:18.067562 containerd[2015]: 2025-10-29 23:32:18.057 [INFO][5116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-kqbtj" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--kqbtj-eth0" Oct 29 23:32:18.154229 containerd[2015]: time="2025-10-29T23:32:18.153486459Z" level=info msg="connecting to shim 7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa" address="unix:///run/containerd/s/21f0f06a2ffb1d58e71a17ad41c6b9098cc2c77df699a71d8cf38d0891718327" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:18.250314 systemd[1]: Started cri-containerd-7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa.scope - libcontainer container 7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa. 
Oct 29 23:32:18.267041 kubelet[3545]: I1029 23:32:18.266034 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sbvzm" podStartSLOduration=55.266003463 podStartE2EDuration="55.266003463s" podCreationTimestamp="2025-10-29 23:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:18.264725667 +0000 UTC m=+58.859863541" watchObservedRunningTime="2025-10-29 23:32:18.266003463 +0000 UTC m=+58.861141061" Oct 29 23:32:18.267041 kubelet[3545]: I1029 23:32:18.266491 3545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t9m5c" podStartSLOduration=55.266477763 podStartE2EDuration="55.266477763s" podCreationTimestamp="2025-10-29 23:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 23:32:18.189097683 +0000 UTC m=+58.784235281" watchObservedRunningTime="2025-10-29 23:32:18.266477763 +0000 UTC m=+58.861615385" Oct 29 23:32:18.497968 systemd-networkd[1811]: cali81ca7fa42cc: Gained IPv6LL Oct 29 23:32:18.549268 containerd[2015]: time="2025-10-29T23:32:18.549177736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-kqbtj,Uid:fa52b929-eb21-441a-b4e7-cea898f2ddc5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7199fea2dba81fac68b8d344d62df107c9b3f7c9748257ec5500937843ef25fa\"" Oct 29 23:32:18.555185 containerd[2015]: time="2025-10-29T23:32:18.554900429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:18.818062 systemd-networkd[1811]: cali2fc2753d94d: Gained IPv6LL Oct 29 23:32:18.912998 containerd[2015]: time="2025-10-29T23:32:18.912940818Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:18.916188 containerd[2015]: time="2025-10-29T23:32:18.916025910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:18.916188 containerd[2015]: time="2025-10-29T23:32:18.916149882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:18.916671 kubelet[3545]: E1029 23:32:18.916586 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:18.916801 kubelet[3545]: E1029 23:32:18.916702 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:18.917306 kubelet[3545]: E1029 23:32:18.916918 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgclx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-kqbtj_calico-apiserver(fa52b929-eb21-441a-b4e7-cea898f2ddc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:18.919001 kubelet[3545]: E1029 23:32:18.918772 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:32:18.939308 systemd[1]: Started sshd@10-172.31.30.28:22-139.178.89.65:42788.service - OpenSSH per-connection server daemon (139.178.89.65:42788). 
Oct 29 23:32:19.155401 sshd[5198]: Accepted publickey for core from 139.178.89.65 port 42788 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:19.160700 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:19.163490 kubelet[3545]: E1029 23:32:19.163311 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:32:19.175425 systemd-logind[1985]: New session 11 of user core. Oct 29 23:32:19.181967 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 29 23:32:19.330004 systemd-networkd[1811]: calie49fa9308c7: Gained IPv6LL Oct 29 23:32:19.469189 sshd[5201]: Connection closed by 139.178.89.65 port 42788 Oct 29 23:32:19.470286 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:19.479039 systemd[1]: sshd@10-172.31.30.28:22-139.178.89.65:42788.service: Deactivated successfully. Oct 29 23:32:19.487031 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 23:32:19.489101 systemd-logind[1985]: Session 11 logged out. Waiting for processes to exit. Oct 29 23:32:19.493975 systemd-logind[1985]: Removed session 11. Oct 29 23:32:19.697936 containerd[2015]: time="2025-10-29T23:32:19.697264722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68d4cfbc-mbbhz,Uid:a76d1c5c-b32f-4f1f-b7bc-93c38286ef75,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:19.967001 systemd-networkd[1811]: cali2a2ba4854c1: Link UP Oct 29 23:32:19.969846 systemd-networkd[1811]: cali2a2ba4854c1: Gained carrier Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.781 [INFO][5215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0 calico-kube-controllers-5f68d4cfbc- calico-system a76d1c5c-b32f-4f1f-b7bc-93c38286ef75 861 0 2025-10-29 23:31:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f68d4cfbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-28 calico-kube-controllers-5f68d4cfbc-mbbhz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2a2ba4854c1 [] [] }} ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.781 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 
23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.837 [INFO][5229] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" HandleID="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Workload="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.837 [INFO][5229] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" HandleID="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Workload="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-28", "pod":"calico-kube-controllers-5f68d4cfbc-mbbhz", "timestamp":"2025-10-29 23:32:19.837695995 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.838 [INFO][5229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.838 [INFO][5229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.838 [INFO][5229] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.852 [INFO][5229] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.871 [INFO][5229] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.888 [INFO][5229] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.894 [INFO][5229] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.903 [INFO][5229] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.904 [INFO][5229] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.915 [INFO][5229] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.934 [INFO][5229] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.955 [INFO][5229] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.197/26] block=192.168.33.192/26 
handle="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.957 [INFO][5229] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.197/26] handle="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" host="ip-172-31-30-28" Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.957 [INFO][5229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 23:32:20.001698 containerd[2015]: 2025-10-29 23:32:19.957 [INFO][5229] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.197/26] IPv6=[] ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" HandleID="k8s-pod-network.8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Workload="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 23:32:20.005215 containerd[2015]: 2025-10-29 23:32:19.961 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0", GenerateName:"calico-kube-controllers-5f68d4cfbc-", Namespace:"calico-system", SelfLink:"", UID:"a76d1c5c-b32f-4f1f-b7bc-93c38286ef75", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68d4cfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"calico-kube-controllers-5f68d4cfbc-mbbhz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2a2ba4854c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:20.005215 containerd[2015]: 2025-10-29 23:32:19.961 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.197/32] ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 23:32:20.005215 containerd[2015]: 2025-10-29 23:32:19.961 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a2ba4854c1 ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" 
WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 23:32:20.005215 containerd[2015]: 2025-10-29 23:32:19.970 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 23:32:20.005215 containerd[2015]: 2025-10-29 23:32:19.971 [INFO][5215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0", GenerateName:"calico-kube-controllers-5f68d4cfbc-", Namespace:"calico-system", SelfLink:"", UID:"a76d1c5c-b32f-4f1f-b7bc-93c38286ef75", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f68d4cfbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c", Pod:"calico-kube-controllers-5f68d4cfbc-mbbhz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.33.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2a2ba4854c1", MAC:"16:88:8d:77:44:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:20.005215 containerd[2015]: 2025-10-29 23:32:19.994 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" Namespace="calico-system" Pod="calico-kube-controllers-5f68d4cfbc-mbbhz" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--kube--controllers--5f68d4cfbc--mbbhz-eth0" Oct 29 23:32:20.062474 containerd[2015]: time="2025-10-29T23:32:20.062396512Z" level=info msg="connecting to shim 8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c" address="unix:///run/containerd/s/afdfb8cdc159b5bef99e7ab8ff46886e77db0804c8a05c2c1ca686ade38b1b0d" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:20.129292 systemd[1]: Started cri-containerd-8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c.scope - libcontainer container 8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c. 
Oct 29 23:32:20.170743 kubelet[3545]: E1029 23:32:20.170687 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:32:20.268989 containerd[2015]: time="2025-10-29T23:32:20.268821329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f68d4cfbc-mbbhz,Uid:a76d1c5c-b32f-4f1f-b7bc-93c38286ef75,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f296dd568e0d81df4bf121b95de39e1ec86d0e315392dbc8dc2650a2cf3a38c\"" Oct 29 23:32:20.275167 containerd[2015]: time="2025-10-29T23:32:20.274849241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:32:20.542800 containerd[2015]: time="2025-10-29T23:32:20.542633910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:20.545207 containerd[2015]: time="2025-10-29T23:32:20.545144406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:32:20.545338 containerd[2015]: time="2025-10-29T23:32:20.545272074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:20.545628 kubelet[3545]: E1029 23:32:20.545568 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:20.545925 kubelet[3545]: E1029 23:32:20.545637 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:20.546013 kubelet[3545]: E1029 23:32:20.545878 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbnrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f68d4cfbc-mbbhz_calico-system(a76d1c5c-b32f-4f1f-b7bc-93c38286ef75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:20.547203 kubelet[3545]: E1029 23:32:20.547143 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:32:20.697817 containerd[2015]: time="2025-10-29T23:32:20.697744471Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:goldmane-666569f655-lq8fh,Uid:0e56094f-29e3-42d4-b70d-e871179d5468,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:20.698381 containerd[2015]: time="2025-10-29T23:32:20.698339455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm5cb,Uid:a3348575-d754-476b-94b5-28b2df5efe85,Namespace:calico-system,Attempt:0,}" Oct 29 23:32:20.699590 containerd[2015]: time="2025-10-29T23:32:20.699542791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-58cj5,Uid:2788a03f-6870-4386-aa19-59a40e87a133,Namespace:calico-apiserver,Attempt:0,}" Oct 29 23:32:21.091407 systemd-networkd[1811]: cali915f4110a25: Link UP Oct 29 23:32:21.095735 systemd-networkd[1811]: cali915f4110a25: Gained carrier Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.858 [INFO][5296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0 goldmane-666569f655- calico-system 0e56094f-29e3-42d4-b70d-e871179d5468 864 0 2025-10-29 23:31:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-28 goldmane-666569f655-lq8fh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali915f4110a25 [] [] }} ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.858 [INFO][5296] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.953 [INFO][5335] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" HandleID="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Workload="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.955 [INFO][5335] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" HandleID="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Workload="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-28", "pod":"goldmane-666569f655-lq8fh", "timestamp":"2025-10-29 23:32:20.9535691 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.955 [INFO][5335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.955 [INFO][5335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.955 [INFO][5335] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:20.986 [INFO][5335] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.001 [INFO][5335] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.012 [INFO][5335] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.017 [INFO][5335] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.025 [INFO][5335] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.026 [INFO][5335] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.030 [INFO][5335] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310 Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.050 [INFO][5335] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.073 [INFO][5335] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.198/26] block=192.168.33.192/26 handle="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.073 [INFO][5335] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.198/26] handle="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" host="ip-172-31-30-28" Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.074 [INFO][5335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:21.140267 containerd[2015]: 2025-10-29 23:32:21.074 [INFO][5335] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.198/26] IPv6=[] ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" HandleID="k8s-pod-network.ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Workload="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.144235 containerd[2015]: 2025-10-29 23:32:21.080 [INFO][5296] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0e56094f-29e3-42d4-b70d-e871179d5468", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"goldmane-666569f655-lq8fh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali915f4110a25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:21.144235 containerd[2015]: 2025-10-29 23:32:21.081 [INFO][5296] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.198/32] ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.144235 containerd[2015]: 2025-10-29 23:32:21.082 [INFO][5296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali915f4110a25 ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.144235 containerd[2015]: 2025-10-29 23:32:21.098 [INFO][5296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.144235 containerd[2015]: 2025-10-29 23:32:21.100 [INFO][5296] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" 
WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0e56094f-29e3-42d4-b70d-e871179d5468", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310", Pod:"goldmane-666569f655-lq8fh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.33.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali915f4110a25", MAC:"aa:ee:5c:98:20:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:21.144235 containerd[2015]: 2025-10-29 23:32:21.129 [INFO][5296] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" Namespace="calico-system" Pod="goldmane-666569f655-lq8fh" WorkloadEndpoint="ip--172--31--30--28-k8s-goldmane--666569f655--lq8fh-eth0" Oct 29 23:32:21.182582 kubelet[3545]: E1029 23:32:21.182496 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:32:21.186729 systemd-networkd[1811]: cali2a2ba4854c1: Gained IPv6LL Oct 29 23:32:21.234146 containerd[2015]: time="2025-10-29T23:32:21.234087438Z" level=info msg="connecting to shim ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310" address="unix:///run/containerd/s/488a4f8c5a65f2059262364f3ee121f19af24207762b6097735fb130a5cd3dfc" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:21.287324 systemd-networkd[1811]: cali3b5d0c4b2d6: Link UP Oct 29 23:32:21.291519 systemd-networkd[1811]: cali3b5d0c4b2d6: Gained carrier Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:20.941 [INFO][5299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0 csi-node-driver- calico-system a3348575-d754-476b-94b5-28b2df5efe85 762 0 2025-10-29 23:31:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-28 csi-node-driver-wm5cb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b5d0c4b2d6 [] [] }} ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:20.943 [INFO][5299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.065 [INFO][5349] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" HandleID="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Workload="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.065 [INFO][5349] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" HandleID="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Workload="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003686f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-28", "pod":"csi-node-driver-wm5cb", "timestamp":"2025-10-29 23:32:21.065105777 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.065 [INFO][5349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.074 [INFO][5349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.074 [INFO][5349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.110 [INFO][5349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.122 [INFO][5349] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.146 [INFO][5349] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.152 [INFO][5349] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.160 [INFO][5349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.161 [INFO][5349] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.165 [INFO][5349] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450 Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.179 [INFO][5349] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.240 [INFO][5349] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.199/26] block=192.168.33.192/26 handle="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.240 [INFO][5349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.199/26] handle="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" host="ip-172-31-30-28" Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.240 [INFO][5349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:21.347405 containerd[2015]: 2025-10-29 23:32:21.240 [INFO][5349] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.199/26] IPv6=[] ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" HandleID="k8s-pod-network.6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Workload="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.351918 containerd[2015]: 2025-10-29 23:32:21.264 [INFO][5299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3348575-d754-476b-94b5-28b2df5efe85", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"csi-node-driver-wm5cb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5d0c4b2d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:21.351918 containerd[2015]: 2025-10-29 23:32:21.265 [INFO][5299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.199/32] ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.351918 containerd[2015]: 2025-10-29 23:32:21.265 [INFO][5299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b5d0c4b2d6 ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.351918 containerd[2015]: 2025-10-29 23:32:21.297 [INFO][5299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.351918 containerd[2015]: 2025-10-29 23:32:21.300 [INFO][5299] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" 
Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3348575-d754-476b-94b5-28b2df5efe85", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450", Pod:"csi-node-driver-wm5cb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.33.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b5d0c4b2d6", MAC:"be:39:1c:f6:44:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:21.351918 containerd[2015]: 2025-10-29 23:32:21.327 [INFO][5299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" Namespace="calico-system" Pod="csi-node-driver-wm5cb" WorkloadEndpoint="ip--172--31--30--28-k8s-csi--node--driver--wm5cb-eth0" Oct 29 23:32:21.399978 systemd[1]: Started cri-containerd-ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310.scope - libcontainer container ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310. 
Oct 29 23:32:21.449130 containerd[2015]: time="2025-10-29T23:32:21.448781047Z" level=info msg="connecting to shim 6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450" address="unix:///run/containerd/s/c18de528f7c80d3feb66b53ba2042ac34049b02bed6d9ae5500b0c081f401233" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:21.489147 systemd-networkd[1811]: caliad4ef9bbf32: Link UP Oct 29 23:32:21.493627 systemd-networkd[1811]: caliad4ef9bbf32: Gained carrier Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:20.922 [INFO][5313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0 calico-apiserver-7cfd4c4c89- calico-apiserver 2788a03f-6870-4386-aa19-59a40e87a133 863 0 2025-10-29 23:31:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfd4c4c89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-28 calico-apiserver-7cfd4c4c89-58cj5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad4ef9bbf32 [] [] }} ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:20.923 [INFO][5313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.078 [INFO][5345] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" HandleID="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Workload="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.078 [INFO][5345] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" HandleID="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Workload="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002637c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-28", "pod":"calico-apiserver-7cfd4c4c89-58cj5", "timestamp":"2025-10-29 23:32:21.078206033 +0000 UTC"}, Hostname:"ip-172-31-30-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.078 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.240 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.241 [INFO][5345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-28' Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.308 [INFO][5345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.342 [INFO][5345] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.371 [INFO][5345] ipam/ipam.go 511: Trying affinity for 192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.382 [INFO][5345] ipam/ipam.go 158: Attempting to load block cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.397 [INFO][5345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.33.192/26 host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.398 [INFO][5345] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.33.192/26 handle="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.415 [INFO][5345] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.432 [INFO][5345] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.33.192/26 handle="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.451 [INFO][5345] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.33.200/26] block=192.168.33.192/26 handle="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.451 [INFO][5345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.33.200/26] handle="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" host="ip-172-31-30-28" Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.452 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 23:32:21.543696 containerd[2015]: 2025-10-29 23:32:21.452 [INFO][5345] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.33.200/26] IPv6=[] ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" HandleID="k8s-pod-network.10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Workload="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.546072 containerd[2015]: 2025-10-29 23:32:21.462 [INFO][5313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0", GenerateName:"calico-apiserver-7cfd4c4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"2788a03f-6870-4386-aa19-59a40e87a133", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd4c4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"", Pod:"calico-apiserver-7cfd4c4c89-58cj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad4ef9bbf32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:21.546072 containerd[2015]: 2025-10-29 23:32:21.463 [INFO][5313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.33.200/32] ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.546072 containerd[2015]: 2025-10-29 23:32:21.463 [INFO][5313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad4ef9bbf32 ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.546072 containerd[2015]: 2025-10-29 23:32:21.492 [INFO][5313] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.546072 containerd[2015]: 2025-10-29 23:32:21.498 [INFO][5313] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0", GenerateName:"calico-apiserver-7cfd4c4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"2788a03f-6870-4386-aa19-59a40e87a133", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 23, 31, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd4c4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-28", ContainerID:"10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c", Pod:"calico-apiserver-7cfd4c4c89-58cj5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.33.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad4ef9bbf32", MAC:"0e:73:fa:3f:d7:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 23:32:21.546072 containerd[2015]: 2025-10-29 23:32:21.534 [INFO][5313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd4c4c89-58cj5" WorkloadEndpoint="ip--172--31--30--28-k8s-calico--apiserver--7cfd4c4c89--58cj5-eth0" Oct 29 23:32:21.583943 systemd[1]: Started cri-containerd-6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450.scope - libcontainer container 6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450. 
Oct 29 23:32:21.641876 containerd[2015]: time="2025-10-29T23:32:21.641709704Z" level=info msg="connecting to shim 10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c" address="unix:///run/containerd/s/8131863053238af9bd0fc517712dc5c11e44865b4f16c43fc71238ff71bc1f3b" namespace=k8s.io protocol=ttrpc version=3 Oct 29 23:32:21.693975 containerd[2015]: time="2025-10-29T23:32:21.693704552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lq8fh,Uid:0e56094f-29e3-42d4-b70d-e871179d5468,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad1e4a8ab97b7ac2d07aeef99b06a7055bc5b26a313f1d4a01545cd18b78c310\"" Oct 29 23:32:21.704904 containerd[2015]: time="2025-10-29T23:32:21.704752808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:32:21.743710 containerd[2015]: time="2025-10-29T23:32:21.742640348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wm5cb,Uid:a3348575-d754-476b-94b5-28b2df5efe85,Namespace:calico-system,Attempt:0,} returns sandbox id \"6627a2a316f8a10fa8dff804fb1fb046f53ac0677d06d66f760fe6be90204450\"" Oct 29 23:32:21.752082 systemd[1]: Started cri-containerd-10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c.scope - libcontainer container 10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c. Oct 29 23:32:21.836965 containerd[2015]: time="2025-10-29T23:32:21.836848017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd4c4c89-58cj5,Uid:2788a03f-6870-4386-aa19-59a40e87a133,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"10607f86fcc7af1fae2870a4133682501898720e765049f67c04b1acb4827c3c\"" Oct 29 23:32:22.005670 containerd[2015]: time="2025-10-29T23:32:22.005589750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:22.007812 containerd[2015]: time="2025-10-29T23:32:22.007757922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:32:22.008004 containerd[2015]: time="2025-10-29T23:32:22.007880286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:22.008247 kubelet[3545]: E1029 23:32:22.008196 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:22.008385 kubelet[3545]: E1029 23:32:22.008356 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:22.009177 kubelet[3545]: E1029 23:32:22.008748 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx6pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lq8fh_calico-system(0e56094f-29e3-42d4-b70d-e871179d5468): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:22.010246 kubelet[3545]: E1029 23:32:22.010091 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:32:22.011027 containerd[2015]: 
time="2025-10-29T23:32:22.010947606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:32:22.206765 kubelet[3545]: E1029 23:32:22.195773 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:32:22.206765 kubelet[3545]: E1029 23:32:22.201337 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:32:22.321258 containerd[2015]: time="2025-10-29T23:32:22.320994451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:22.324096 containerd[2015]: time="2025-10-29T23:32:22.323915635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:32:22.324096 containerd[2015]: time="2025-10-29T23:32:22.324056575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:32:22.324590 kubelet[3545]: E1029 23:32:22.324434 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:22.324590 kubelet[3545]: E1029 23:32:22.324498 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:22.325081 kubelet[3545]: E1029 23:32:22.324849 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:22.325874 containerd[2015]: time="2025-10-29T23:32:22.325298695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:22.465936 systemd-networkd[1811]: cali915f4110a25: Gained IPv6LL Oct 29 23:32:22.601563 containerd[2015]: time="2025-10-29T23:32:22.601404393Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:22.603642 containerd[2015]: time="2025-10-29T23:32:22.603574401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:22.603794 containerd[2015]: time="2025-10-29T23:32:22.603716025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:22.604028 kubelet[3545]: E1029 23:32:22.603958 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:22.604113 kubelet[3545]: E1029 23:32:22.604028 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:22.604406 kubelet[3545]: E1029 23:32:22.604312 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-58cj5_calico-apiserver(2788a03f-6870-4386-aa19-59a40e87a133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:22.605813 kubelet[3545]: E1029 23:32:22.605737 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:32:22.607130 containerd[2015]: time="2025-10-29T23:32:22.606409713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:32:22.721987 
systemd-networkd[1811]: cali3b5d0c4b2d6: Gained IPv6LL Oct 29 23:32:22.891391 containerd[2015]: time="2025-10-29T23:32:22.891233350Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:22.894444 containerd[2015]: time="2025-10-29T23:32:22.894333922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:32:22.894612 containerd[2015]: time="2025-10-29T23:32:22.894470614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:32:22.895171 kubelet[3545]: E1029 23:32:22.894970 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:22.895544 kubelet[3545]: E1029 23:32:22.895330 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:22.896748 kubelet[3545]: E1029 23:32:22.896461 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:22.897990 kubelet[3545]: E1029 23:32:22.897879 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:23.204724 kubelet[3545]: E1029 23:32:23.204333 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:32:23.204724 kubelet[3545]: E1029 23:32:23.204583 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:32:23.206717 kubelet[3545]: E1029 23:32:23.206575 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:23.489957 systemd-networkd[1811]: caliad4ef9bbf32: Gained IPv6LL Oct 29 23:32:24.529518 systemd[1]: Started sshd@11-172.31.30.28:22-139.178.89.65:42800.service - OpenSSH per-connection server daemon (139.178.89.65:42800). Oct 29 23:32:24.725153 sshd[5534]: Accepted publickey for core from 139.178.89.65 port 42800 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:24.728851 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:24.737804 systemd-logind[1985]: New session 12 of user core. Oct 29 23:32:24.743937 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 29 23:32:25.023243 sshd[5537]: Connection closed by 139.178.89.65 port 42800 Oct 29 23:32:25.023966 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:25.030928 systemd[1]: sshd@11-172.31.30.28:22-139.178.89.65:42800.service: Deactivated successfully. Oct 29 23:32:25.036320 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 23:32:25.040284 systemd-logind[1985]: Session 12 logged out. Waiting for processes to exit. Oct 29 23:32:25.043762 systemd-logind[1985]: Removed session 12. Oct 29 23:32:25.063233 systemd[1]: Started sshd@12-172.31.30.28:22-139.178.89.65:42808.service - OpenSSH per-connection server daemon (139.178.89.65:42808). Oct 29 23:32:25.257954 sshd[5552]: Accepted publickey for core from 139.178.89.65 port 42808 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:25.260336 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:25.269142 systemd-logind[1985]: New session 13 of user core. 
Oct 29 23:32:25.278058 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 29 23:32:25.654000 sshd[5555]: Connection closed by 139.178.89.65 port 42808 Oct 29 23:32:25.653788 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:25.668776 systemd[1]: sshd@12-172.31.30.28:22-139.178.89.65:42808.service: Deactivated successfully. Oct 29 23:32:25.675937 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 23:32:25.685042 systemd-logind[1985]: Session 13 logged out. Waiting for processes to exit. Oct 29 23:32:25.709122 systemd[1]: Started sshd@13-172.31.30.28:22-139.178.89.65:42814.service - OpenSSH per-connection server daemon (139.178.89.65:42814). Oct 29 23:32:25.716638 systemd-logind[1985]: Removed session 13. Oct 29 23:32:25.732431 ntpd[2210]: Listen normally on 6 vxlan.calico 192.168.33.192:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 6 vxlan.calico 192.168.33.192:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 7 cali6d73300dc68 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 8 vxlan.calico [fe80::6413:6dff:fef2:4ff9%5]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 9 cali2fc2753d94d [fe80::ecee:eeff:feee:eeee%8]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 10 cali81ca7fa42cc [fe80::ecee:eeff:feee:eeee%9]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 11 calie49fa9308c7 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 12 cali2a2ba4854c1 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 13 cali915f4110a25 [fe80::ecee:eeff:feee:eeee%12]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 14 cali3b5d0c4b2d6 [fe80::ecee:eeff:feee:eeee%13]:123 Oct 29 23:32:25.736626 ntpd[2210]: 29 Oct 23:32:25 ntpd[2210]: Listen normally on 15 caliad4ef9bbf32 [fe80::ecee:eeff:feee:eeee%14]:123 Oct 29 23:32:25.734858 ntpd[2210]: Listen normally on 7 cali6d73300dc68 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 29 23:32:25.734909 ntpd[2210]: Listen normally on 8 vxlan.calico [fe80::6413:6dff:fef2:4ff9%5]:123 Oct 29 23:32:25.734954 ntpd[2210]: Listen normally on 9 cali2fc2753d94d [fe80::ecee:eeff:feee:eeee%8]:123 Oct 29 23:32:25.734998 ntpd[2210]: Listen normally on 10 cali81ca7fa42cc [fe80::ecee:eeff:feee:eeee%9]:123 Oct 29 23:32:25.735048 ntpd[2210]: Listen normally on 11 calie49fa9308c7 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 29 23:32:25.735090 ntpd[2210]: Listen normally on 12 cali2a2ba4854c1 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 29 23:32:25.735133 ntpd[2210]: Listen normally on 13 cali915f4110a25 [fe80::ecee:eeff:feee:eeee%12]:123 Oct 29 23:32:25.735175 ntpd[2210]: Listen normally on 14 cali3b5d0c4b2d6 [fe80::ecee:eeff:feee:eeee%13]:123 Oct 29 23:32:25.735219 ntpd[2210]: Listen normally on 15 caliad4ef9bbf32 [fe80::ecee:eeff:feee:eeee%14]:123 Oct 29 23:32:25.921816 sshd[5565]: Accepted publickey for core from 139.178.89.65 port 42814 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:25.924809 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:25.934452 systemd-logind[1985]: New session 14 of user core. 
Oct 29 23:32:25.943907 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 29 23:32:26.194495 sshd[5574]: Connection closed by 139.178.89.65 port 42814 Oct 29 23:32:26.195384 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:26.202791 systemd[1]: sshd@13-172.31.30.28:22-139.178.89.65:42814.service: Deactivated successfully. Oct 29 23:32:26.206561 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 23:32:26.212912 systemd-logind[1985]: Session 14 logged out. Waiting for processes to exit. Oct 29 23:32:26.215644 systemd-logind[1985]: Removed session 14. Oct 29 23:32:28.698819 containerd[2015]: time="2025-10-29T23:32:28.698424027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:32:28.966133 containerd[2015]: time="2025-10-29T23:32:28.965677420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:28.968615 containerd[2015]: time="2025-10-29T23:32:28.968546464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:32:28.968779 containerd[2015]: time="2025-10-29T23:32:28.968686372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:32:28.968954 kubelet[3545]: E1029 23:32:28.968904 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:28.970253 kubelet[3545]: E1029 23:32:28.968965 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:28.970253 kubelet[3545]: E1029 23:32:28.969123 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:752c5f95daab45aab789fd633a80c4d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:28.975496 containerd[2015]: time="2025-10-29T23:32:28.974958940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:32:29.251322 containerd[2015]: time="2025-10-29T23:32:29.250830278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:29.254100 containerd[2015]: time="2025-10-29T23:32:29.253934054Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:32:29.254100 containerd[2015]: time="2025-10-29T23:32:29.254057894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:29.254487 kubelet[3545]: E1029 23:32:29.254388 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:29.254487 kubelet[3545]: E1029 23:32:29.254455 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:29.254742 kubelet[3545]: E1029 23:32:29.254612 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:29.256368 kubelet[3545]: E1029 23:32:29.256179 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:32:31.235232 systemd[1]: Started sshd@14-172.31.30.28:22-139.178.89.65:60062.service - OpenSSH per-connection server daemon (139.178.89.65:60062). 
Oct 29 23:32:31.441124 sshd[5591]: Accepted publickey for core from 139.178.89.65 port 60062 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:31.443481 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:31.452951 systemd-logind[1985]: New session 15 of user core. Oct 29 23:32:31.459976 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 29 23:32:31.725092 sshd[5595]: Connection closed by 139.178.89.65 port 60062 Oct 29 23:32:31.726114 sshd-session[5591]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:31.734299 systemd-logind[1985]: Session 15 logged out. Waiting for processes to exit. Oct 29 23:32:31.736174 systemd[1]: sshd@14-172.31.30.28:22-139.178.89.65:60062.service: Deactivated successfully. Oct 29 23:32:31.740695 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 23:32:31.746359 systemd-logind[1985]: Removed session 15. Oct 29 23:32:33.700695 containerd[2015]: time="2025-10-29T23:32:33.700164884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:33.991697 containerd[2015]: time="2025-10-29T23:32:33.991386969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:33.994080 containerd[2015]: time="2025-10-29T23:32:33.993992673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:33.994347 containerd[2015]: time="2025-10-29T23:32:33.994223169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:33.994768 kubelet[3545]: E1029 23:32:33.994693 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:33.995988 kubelet[3545]: E1029 23:32:33.994768 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:33.995988 kubelet[3545]: E1029 23:32:33.995126 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgclx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-kqbtj_calico-apiserver(fa52b929-eb21-441a-b4e7-cea898f2ddc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:33.996216 containerd[2015]: time="2025-10-29T23:32:33.995194749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:32:33.997220 kubelet[3545]: E1029 23:32:33.996839 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:32:34.284557 containerd[2015]: time="2025-10-29T23:32:34.284301427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:34.286691 containerd[2015]: time="2025-10-29T23:32:34.286528591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:32:34.286691 containerd[2015]: 
time="2025-10-29T23:32:34.286576591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:34.287542 kubelet[3545]: E1029 23:32:34.286990 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:34.287542 kubelet[3545]: E1029 23:32:34.287053 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:32:34.287542 kubelet[3545]: E1029 23:32:34.287263 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-58cj5_calico-apiserver(2788a03f-6870-4386-aa19-59a40e87a133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:34.289129 kubelet[3545]: E1029 23:32:34.289030 3545 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:32:35.699606 containerd[2015]: time="2025-10-29T23:32:35.699173818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:32:35.964738 containerd[2015]: time="2025-10-29T23:32:35.964555523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:35.966944 containerd[2015]: time="2025-10-29T23:32:35.966869435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:32:35.967092 containerd[2015]: time="2025-10-29T23:32:35.966994703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:32:35.967432 kubelet[3545]: E1029 23:32:35.967383 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:35.968331 kubelet[3545]: E1029 23:32:35.968006 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:32:35.968331 kubelet[3545]: E1029 23:32:35.968210 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx6pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lq8fh_calico-system(0e56094f-29e3-42d4-b70d-e871179d5468): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:35.969881 kubelet[3545]: E1029 23:32:35.969462 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:32:36.765389 systemd[1]: Started 
sshd@15-172.31.30.28:22-139.178.89.65:45434.service - OpenSSH per-connection server daemon (139.178.89.65:45434). Oct 29 23:32:36.966990 sshd[5616]: Accepted publickey for core from 139.178.89.65 port 45434 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:36.969747 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:36.979102 systemd-logind[1985]: New session 16 of user core. Oct 29 23:32:36.984936 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 29 23:32:37.237308 sshd[5619]: Connection closed by 139.178.89.65 port 45434 Oct 29 23:32:37.238106 sshd-session[5616]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:37.245361 systemd[1]: sshd@15-172.31.30.28:22-139.178.89.65:45434.service: Deactivated successfully. Oct 29 23:32:37.251210 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 23:32:37.253191 systemd-logind[1985]: Session 16 logged out. Waiting for processes to exit. Oct 29 23:32:37.256804 systemd-logind[1985]: Removed session 16. Oct 29 23:32:37.702701 containerd[2015]: time="2025-10-29T23:32:37.702521964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:32:38.016279 containerd[2015]: time="2025-10-29T23:32:38.016084521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:38.019641 containerd[2015]: time="2025-10-29T23:32:38.019521321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:32:38.019641 containerd[2015]: time="2025-10-29T23:32:38.019601361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:38.020200 kubelet[3545]: E1029 23:32:38.020142 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:38.021507 kubelet[3545]: E1029 23:32:38.020704 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:32:38.021507 kubelet[3545]: E1029 23:32:38.020913 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbnrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f68d4cfbc-mbbhz_calico-system(a76d1c5c-b32f-4f1f-b7bc-93c38286ef75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:38.022689 kubelet[3545]: E1029 23:32:38.022572 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:32:38.700231 containerd[2015]: time="2025-10-29T23:32:38.699822661Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:32:38.964760 containerd[2015]: time="2025-10-29T23:32:38.964409390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:38.967167 containerd[2015]: time="2025-10-29T23:32:38.967086206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:32:38.967357 containerd[2015]: time="2025-10-29T23:32:38.967235450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:32:38.968772 kubelet[3545]: E1029 23:32:38.968700 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:38.969213 kubelet[3545]: E1029 23:32:38.968933 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:32:38.969213 kubelet[3545]: E1029 23:32:38.969120 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:38.973191 containerd[2015]: time="2025-10-29T23:32:38.972813446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:32:39.243687 containerd[2015]: time="2025-10-29T23:32:39.243180623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:39.246145 containerd[2015]: time="2025-10-29T23:32:39.246068459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:32:39.246244 containerd[2015]: time="2025-10-29T23:32:39.246198419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:32:39.246568 kubelet[3545]: E1029 23:32:39.246488 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:39.247816 kubelet[3545]: E1029 23:32:39.246574 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:32:39.247816 kubelet[3545]: E1029 23:32:39.246782 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:39.248260 kubelet[3545]: E1029 23:32:39.248099 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:41.706670 kubelet[3545]: E1029 23:32:41.704970 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:32:42.275796 systemd[1]: Started sshd@16-172.31.30.28:22-139.178.89.65:45444.service - OpenSSH per-connection server daemon (139.178.89.65:45444). Oct 29 23:32:42.473116 sshd[5632]: Accepted publickey for core from 139.178.89.65 port 45444 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:42.476151 sshd-session[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:42.485797 systemd-logind[1985]: New session 17 of user core. Oct 29 23:32:42.495918 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 29 23:32:42.754403 sshd[5635]: Connection closed by 139.178.89.65 port 45444 Oct 29 23:32:42.754936 sshd-session[5632]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:42.763165 systemd[1]: sshd@16-172.31.30.28:22-139.178.89.65:45444.service: Deactivated successfully. Oct 29 23:32:42.768184 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 23:32:42.769951 systemd-logind[1985]: Session 17 logged out. Waiting for processes to exit. Oct 29 23:32:42.773781 systemd-logind[1985]: Removed session 17. Oct 29 23:32:43.255270 containerd[2015]: time="2025-10-29T23:32:43.255213615Z" level=info msg="TaskExit event in podsandbox handler container_id:\"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" id:\"2a3ef3cb0246e2db9d2a46bcfefdb66026f9e2bfce44add837b3bd2ff03b3c9c\" pid:5658 exited_at:{seconds:1761780763 nanos:254166639}" Oct 29 23:32:47.701315 kubelet[3545]: E1029 23:32:47.701239 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:32:47.792947 systemd[1]: Started sshd@17-172.31.30.28:22-139.178.89.65:45692.service - OpenSSH per-connection server daemon (139.178.89.65:45692). Oct 29 23:32:48.002050 sshd[5673]: Accepted publickey for core from 139.178.89.65 port 45692 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:48.005691 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:48.020085 systemd-logind[1985]: New session 18 of user core. Oct 29 23:32:48.028968 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 29 23:32:48.370923 sshd[5676]: Connection closed by 139.178.89.65 port 45692 Oct 29 23:32:48.371995 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:48.383121 systemd[1]: sshd@17-172.31.30.28:22-139.178.89.65:45692.service: Deactivated successfully. 
Oct 29 23:32:48.393162 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 23:32:48.398080 systemd-logind[1985]: Session 18 logged out. Waiting for processes to exit. Oct 29 23:32:48.420121 systemd[1]: Started sshd@18-172.31.30.28:22-139.178.89.65:45702.service - OpenSSH per-connection server daemon (139.178.89.65:45702). Oct 29 23:32:48.423461 systemd-logind[1985]: Removed session 18. Oct 29 23:32:48.630440 sshd[5689]: Accepted publickey for core from 139.178.89.65 port 45702 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:48.633535 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:48.644350 systemd-logind[1985]: New session 19 of user core. Oct 29 23:32:48.652981 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 29 23:32:48.700790 kubelet[3545]: E1029 23:32:48.699966 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:32:49.204135 sshd[5692]: Connection closed by 139.178.89.65 port 45702 Oct 29 23:32:49.204961 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:49.215614 systemd[1]: sshd@18-172.31.30.28:22-139.178.89.65:45702.service: Deactivated successfully. Oct 29 23:32:49.221381 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 23:32:49.230985 systemd-logind[1985]: Session 19 logged out. Waiting for processes to exit. Oct 29 23:32:49.255315 systemd[1]: Started sshd@19-172.31.30.28:22-139.178.89.65:45718.service - OpenSSH per-connection server daemon (139.178.89.65:45718). Oct 29 23:32:49.264824 systemd-logind[1985]: Removed session 19. Oct 29 23:32:49.496598 sshd[5702]: Accepted publickey for core from 139.178.89.65 port 45718 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:49.498827 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:49.511832 systemd-logind[1985]: New session 20 of user core. Oct 29 23:32:49.518932 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 29 23:32:50.701123 kubelet[3545]: E1029 23:32:50.700886 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:32:50.831122 sshd[5705]: Connection closed by 139.178.89.65 port 45718 Oct 29 23:32:50.833583 sshd-session[5702]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:50.848056 systemd-logind[1985]: Session 20 logged out. Waiting for processes to exit. Oct 29 23:32:50.851005 systemd[1]: sshd@19-172.31.30.28:22-139.178.89.65:45718.service: Deactivated successfully. Oct 29 23:32:50.856227 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 23:32:50.885264 systemd-logind[1985]: Removed session 20. Oct 29 23:32:50.886176 systemd[1]: Started sshd@20-172.31.30.28:22-139.178.89.65:45720.service - OpenSSH per-connection server daemon (139.178.89.65:45720). Oct 29 23:32:51.106343 sshd[5725]: Accepted publickey for core from 139.178.89.65 port 45720 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:51.109828 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:51.121752 systemd-logind[1985]: New session 21 of user core. Oct 29 23:32:51.129943 systemd[1]: Started session-21.scope - Session 21 of User core. 
Oct 29 23:32:51.703430 kubelet[3545]: E1029 23:32:51.703350 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:32:51.704148 kubelet[3545]: E1029 23:32:51.704078 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:32:51.847266 sshd[5728]: Connection closed by 139.178.89.65 port 45720 Oct 29 23:32:51.848382 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:51.860345 systemd[1]: sshd@20-172.31.30.28:22-139.178.89.65:45720.service: Deactivated successfully. Oct 29 23:32:51.869980 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 23:32:51.874775 systemd-logind[1985]: Session 21 logged out. Waiting for processes to exit. Oct 29 23:32:51.894362 systemd[1]: Started sshd@21-172.31.30.28:22-139.178.89.65:45724.service - OpenSSH per-connection server daemon (139.178.89.65:45724). Oct 29 23:32:51.899154 systemd-logind[1985]: Removed session 21. Oct 29 23:32:52.108216 sshd[5739]: Accepted publickey for core from 139.178.89.65 port 45724 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:52.112477 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:52.126803 systemd-logind[1985]: New session 22 of user core. Oct 29 23:32:52.133983 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 29 23:32:52.418735 sshd[5742]: Connection closed by 139.178.89.65 port 45724 Oct 29 23:32:52.419604 sshd-session[5739]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:52.431125 systemd[1]: sshd@21-172.31.30.28:22-139.178.89.65:45724.service: Deactivated successfully. Oct 29 23:32:52.437924 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 23:32:52.440454 systemd-logind[1985]: Session 22 logged out. Waiting for processes to exit. Oct 29 23:32:52.445244 systemd-logind[1985]: Removed session 22. 
Oct 29 23:32:54.699132 containerd[2015]: time="2025-10-29T23:32:54.698778376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:32:54.987671 containerd[2015]: time="2025-10-29T23:32:54.987449165Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:54.990284 containerd[2015]: time="2025-10-29T23:32:54.990079709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:32:54.990284 containerd[2015]: time="2025-10-29T23:32:54.990219821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:32:54.990520 kubelet[3545]: E1029 23:32:54.990428 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:54.990520 kubelet[3545]: E1029 23:32:54.990492 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:32:54.991684 kubelet[3545]: E1029 23:32:54.990642 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:752c5f95daab45aab789fd633a80c4d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:54.995206 containerd[2015]: time="2025-10-29T23:32:54.995128326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:32:55.299903 containerd[2015]: time="2025-10-29T23:32:55.299726367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:32:55.303957 containerd[2015]: time="2025-10-29T23:32:55.303864735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:32:55.304094 containerd[2015]: time="2025-10-29T23:32:55.303876327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:32:55.305289 kubelet[3545]: E1029 23:32:55.305216 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:55.305876 kubelet[3545]: E1029 23:32:55.305284 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:32:55.305876 kubelet[3545]: E1029 23:32:55.305450 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:32:55.306910 kubelet[3545]: E1029 23:32:55.306732 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:32:57.456488 systemd[1]: Started sshd@22-172.31.30.28:22-139.178.89.65:41830.service - OpenSSH per-connection server daemon (139.178.89.65:41830). 
Oct 29 23:32:57.658782 sshd[5762]: Accepted publickey for core from 139.178.89.65 port 41830 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:32:57.660870 sshd-session[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:32:57.671076 systemd-logind[1985]: New session 23 of user core. Oct 29 23:32:57.677951 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 29 23:32:57.985421 sshd[5765]: Connection closed by 139.178.89.65 port 41830 Oct 29 23:32:57.986458 sshd-session[5762]: pam_unix(sshd:session): session closed for user core Oct 29 23:32:57.997360 systemd[1]: sshd@22-172.31.30.28:22-139.178.89.65:41830.service: Deactivated successfully. Oct 29 23:32:58.004515 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 23:32:58.008082 systemd-logind[1985]: Session 23 logged out. Waiting for processes to exit. Oct 29 23:32:58.014545 systemd-logind[1985]: Removed session 23. Oct 29 23:33:02.698278 containerd[2015]: time="2025-10-29T23:33:02.697947072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:02.958934 containerd[2015]: time="2025-10-29T23:33:02.958630189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:02.961862 containerd[2015]: time="2025-10-29T23:33:02.961704325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:02.961862 containerd[2015]: time="2025-10-29T23:33:02.961765309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:02.962178 kubelet[3545]: E1029 23:33:02.962055 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:02.962706 kubelet[3545]: E1029 23:33:02.962153 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:02.962706 kubelet[3545]: E1029 23:33:02.962376 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-58cj5_calico-apiserver(2788a03f-6870-4386-aa19-59a40e87a133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:02.964780 kubelet[3545]: E1029 23:33:02.964711 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:33:03.029112 systemd[1]: Started sshd@23-172.31.30.28:22-139.178.89.65:41840.service - OpenSSH per-connection server daemon (139.178.89.65:41840). Oct 29 23:33:03.237915 sshd[5781]: Accepted publickey for core from 139.178.89.65 port 41840 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:33:03.240374 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:03.252053 systemd-logind[1985]: New session 24 of user core. Oct 29 23:33:03.259955 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 29 23:33:03.538287 sshd[5784]: Connection closed by 139.178.89.65 port 41840 Oct 29 23:33:03.539178 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:03.551286 systemd-logind[1985]: Session 24 logged out. Waiting for processes to exit. Oct 29 23:33:03.552909 systemd[1]: sshd@23-172.31.30.28:22-139.178.89.65:41840.service: Deactivated successfully. Oct 29 23:33:03.558613 systemd[1]: session-24.scope: Deactivated successfully. Oct 29 23:33:03.566107 systemd-logind[1985]: Removed session 24. Oct 29 23:33:03.700095 containerd[2015]: time="2025-10-29T23:33:03.699971425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:03.982877 containerd[2015]: time="2025-10-29T23:33:03.982666262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:03.985054 containerd[2015]: time="2025-10-29T23:33:03.984983966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:03.985330 containerd[2015]: time="2025-10-29T23:33:03.985049474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:03.985568 kubelet[3545]: E1029 23:33:03.985496 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:03.986341 kubelet[3545]: E1029 23:33:03.985567 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:03.986341 kubelet[3545]: E1029 23:33:03.985899 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgclx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-kqbtj_calico-apiserver(fa52b929-eb21-441a-b4e7-cea898f2ddc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:03.987244 containerd[2015]: time="2025-10-29T23:33:03.987188690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:33:03.987865 kubelet[3545]: E1029 23:33:03.987060 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:33:04.263723 containerd[2015]: time="2025-10-29T23:33:04.263117652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:04.265608 containerd[2015]: time="2025-10-29T23:33:04.265459032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:33:04.265999 containerd[2015]: time="2025-10-29T23:33:04.265709868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:04.266381 kubelet[3545]: E1029 23:33:04.266320 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:04.266503 kubelet[3545]: E1029 23:33:04.266396 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:04.266785 kubelet[3545]: E1029 23:33:04.266590 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbnrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f68d4cfbc-mbbhz_calico-system(a76d1c5c-b32f-4f1f-b7bc-93c38286ef75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:04.267898 kubelet[3545]: E1029 23:33:04.267773 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:33:04.699014 containerd[2015]: time="2025-10-29T23:33:04.698741186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:33:04.987019 containerd[2015]: time="2025-10-29T23:33:04.986753139Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:04.989177 containerd[2015]: time="2025-10-29T23:33:04.988968747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:33:04.989177 containerd[2015]: time="2025-10-29T23:33:04.989111511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:33:04.989592 kubelet[3545]: E1029 23:33:04.989462 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:04.989592 kubelet[3545]: E1029 23:33:04.989576 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:04.991239 kubelet[3545]: E1029 23:33:04.989865 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:04.994072 containerd[2015]: time="2025-10-29T23:33:04.994009623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:33:05.282353 containerd[2015]: time="2025-10-29T23:33:05.281735137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:05.284258 containerd[2015]: time="2025-10-29T23:33:05.284107657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:33:05.284258 containerd[2015]: time="2025-10-29T23:33:05.284185345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:33:05.284700 kubelet[3545]: E1029 23:33:05.284396 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:33:05.284700 kubelet[3545]: E1029 23:33:05.284458 3545 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:33:05.285242 kubelet[3545]: E1029 23:33:05.285161 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:05.286627 kubelet[3545]: E1029 23:33:05.286449 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:33:05.700367 containerd[2015]: time="2025-10-29T23:33:05.700300587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:33:05.981087 containerd[2015]: time="2025-10-29T23:33:05.980927536Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:05.983223 containerd[2015]: time="2025-10-29T23:33:05.983134348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:33:05.983537 containerd[2015]: time="2025-10-29T23:33:05.983264680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:05.983666 kubelet[3545]: E1029 23:33:05.983498 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:33:05.983666 kubelet[3545]: E1029 23:33:05.983563 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:33:05.983958 kubelet[3545]: E1029 23:33:05.983804 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx6pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lq8fh_calico-system(0e56094f-29e3-42d4-b70d-e871179d5468): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:05.985171 kubelet[3545]: E1029 23:33:05.985092 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:33:08.576106 systemd[1]: Started sshd@24-172.31.30.28:22-139.178.89.65:58784.service - OpenSSH per-connection server daemon (139.178.89.65:58784). Oct 29 23:33:08.700343 kubelet[3545]: E1029 23:33:08.700225 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:33:08.782581 sshd[5796]: Accepted publickey for core from 139.178.89.65 port 58784 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:33:08.786290 sshd-session[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:08.799321 systemd-logind[1985]: New session 25 of user core. 
Oct 29 23:33:08.808935 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 29 23:33:09.081192 sshd[5799]: Connection closed by 139.178.89.65 port 58784 Oct 29 23:33:09.082990 sshd-session[5796]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:09.092372 systemd[1]: sshd@24-172.31.30.28:22-139.178.89.65:58784.service: Deactivated successfully. Oct 29 23:33:09.099686 systemd[1]: session-25.scope: Deactivated successfully. Oct 29 23:33:09.103973 systemd-logind[1985]: Session 25 logged out. Waiting for processes to exit. Oct 29 23:33:09.108164 systemd-logind[1985]: Removed session 25. Oct 29 23:33:13.399557 containerd[2015]: time="2025-10-29T23:33:13.399440721Z" level=info msg="TaskExit event in podsandbox handler container_id:\"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" id:\"6504f30bb9a22121750e02766a50cc2a7daccf0d5b77da65f4e693d00020a876\" pid:5822 exited_at:{seconds:1761780793 nanos:399060945}" Oct 29 23:33:14.128118 systemd[1]: Started sshd@25-172.31.30.28:22-139.178.89.65:58792.service - OpenSSH per-connection server daemon (139.178.89.65:58792). Oct 29 23:33:14.340943 sshd[5837]: Accepted publickey for core from 139.178.89.65 port 58792 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:33:14.344804 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:14.356771 systemd-logind[1985]: New session 26 of user core. Oct 29 23:33:14.366300 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 29 23:33:14.680747 sshd[5840]: Connection closed by 139.178.89.65 port 58792 Oct 29 23:33:14.682521 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:14.694072 systemd[1]: sshd@25-172.31.30.28:22-139.178.89.65:58792.service: Deactivated successfully. Oct 29 23:33:14.698901 kubelet[3545]: E1029 23:33:14.698781 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:33:14.702616 systemd[1]: session-26.scope: Deactivated successfully. Oct 29 23:33:14.708152 systemd-logind[1985]: Session 26 logged out. Waiting for processes to exit. Oct 29 23:33:14.712411 systemd-logind[1985]: Removed session 26. 
Oct 29 23:33:15.697344 kubelet[3545]: E1029 23:33:15.697209 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:33:16.698382 kubelet[3545]: E1029 23:33:16.698251 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:33:18.698456 kubelet[3545]: E1029 23:33:18.698305 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:33:19.734908 systemd[1]: Started sshd@26-172.31.30.28:22-139.178.89.65:48654.service - OpenSSH per-connection server daemon (139.178.89.65:48654). Oct 29 23:33:19.953609 sshd[5853]: Accepted publickey for core from 139.178.89.65 port 48654 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:33:19.958190 sshd-session[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:19.972959 systemd-logind[1985]: New session 27 of user core. Oct 29 23:33:19.977959 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 29 23:33:20.271545 sshd[5857]: Connection closed by 139.178.89.65 port 48654 Oct 29 23:33:20.272432 sshd-session[5853]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:20.282219 systemd[1]: sshd@26-172.31.30.28:22-139.178.89.65:48654.service: Deactivated successfully. Oct 29 23:33:20.287887 systemd[1]: session-27.scope: Deactivated successfully. Oct 29 23:33:20.290605 systemd-logind[1985]: Session 27 logged out. Waiting for processes to exit. Oct 29 23:33:20.295921 systemd-logind[1985]: Removed session 27. 
Oct 29 23:33:20.699361 kubelet[3545]: E1029 23:33:20.699283 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:33:23.703238 kubelet[3545]: E1029 23:33:23.703002 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:33:25.313877 systemd[1]: Started sshd@27-172.31.30.28:22-139.178.89.65:48666.service - OpenSSH per-connection server daemon (139.178.89.65:48666). Oct 29 23:33:25.511677 sshd[5871]: Accepted publickey for core from 139.178.89.65 port 48666 ssh2: RSA SHA256:vCeJlONcZECHmny0G3wOrs0hr6RKqf7GCxdKXo+s1Pc Oct 29 23:33:25.513802 sshd-session[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 29 23:33:25.524817 systemd-logind[1985]: New session 28 of user core. Oct 29 23:33:25.530193 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 29 23:33:25.702683 kubelet[3545]: E1029 23:33:25.700719 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:33:25.845865 sshd[5874]: Connection closed by 139.178.89.65 port 48666 Oct 29 23:33:25.847470 sshd-session[5871]: pam_unix(sshd:session): session closed for user core Oct 29 23:33:25.859544 systemd-logind[1985]: Session 28 logged out. Waiting for processes to exit. Oct 29 23:33:25.864114 systemd[1]: sshd@27-172.31.30.28:22-139.178.89.65:48666.service: Deactivated successfully. Oct 29 23:33:25.874373 systemd[1]: session-28.scope: Deactivated successfully. Oct 29 23:33:25.883798 systemd-logind[1985]: Removed session 28. 
Oct 29 23:33:26.698926 kubelet[3545]: E1029 23:33:26.698852 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:33:29.702420 kubelet[3545]: E1029 23:33:29.702350 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:33:30.700965 kubelet[3545]: E1029 23:33:30.700840 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:33:33.701400 kubelet[3545]: E1029 23:33:33.701193 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:33:36.699215 containerd[2015]: time="2025-10-29T23:33:36.699142593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 23:33:36.993145 containerd[2015]: time="2025-10-29T23:33:36.992832070Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:36.995112 containerd[2015]: time="2025-10-29T23:33:36.994981822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 23:33:36.995236 containerd[2015]: time="2025-10-29T23:33:36.995106046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 29 23:33:36.995582 kubelet[3545]: E1029 23:33:36.995507 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:33:36.996103 kubelet[3545]: E1029 23:33:36.995604 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 23:33:36.996396 kubelet[3545]: E1029 23:33:36.996318 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:752c5f95daab45aab789fd633a80c4d0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:36.999214 containerd[2015]: time="2025-10-29T23:33:36.999148246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 23:33:37.283316 containerd[2015]: time="2025-10-29T23:33:37.283159688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:37.285475 containerd[2015]: time="2025-10-29T23:33:37.285394964Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 23:33:37.285748 containerd[2015]: time="2025-10-29T23:33:37.285522968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:37.285980 kubelet[3545]: E1029 23:33:37.285932 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:33:37.286153 kubelet[3545]: E1029 23:33:37.286124 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 23:33:37.286666 kubelet[3545]: E1029 23:33:37.286521 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgqwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c7fdf84f6-glqpf_calico-system(e0ab4b8c-c86a-446f-bf29-1179e47cdecc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:37.288841 kubelet[3545]: E1029 23:33:37.288761 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:33:39.702051 kubelet[3545]: E1029 23:33:39.700151 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:33:41.703854 kubelet[3545]: E1029 23:33:41.701808 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:33:43.431688 containerd[2015]: time="2025-10-29T23:33:43.430913474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" id:\"731b756f2c4798ba06c1e5b9678d6bc78f3e5b3050649e7e7c0ff947ebcea4ee\" pid:5907 exit_status:1 exited_at:{seconds:1761780823 nanos:430053638}" Oct 29 23:33:44.699056 containerd[2015]: time="2025-10-29T23:33:44.698975848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:44.702543 kubelet[3545]: E1029 23:33:44.702442 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:33:44.952461 containerd[2015]: time="2025-10-29T23:33:44.951935238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:44.954407 containerd[2015]: time="2025-10-29T23:33:44.954269070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:44.954557 containerd[2015]: time="2025-10-29T23:33:44.954338526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:44.955200 kubelet[3545]: E1029 23:33:44.954829 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:44.955200 kubelet[3545]: E1029 23:33:44.954894 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:44.955200 kubelet[3545]: E1029 23:33:44.955097 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgclx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-kqbtj_calico-apiserver(fa52b929-eb21-441a-b4e7-cea898f2ddc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:44.956882 kubelet[3545]: E1029 23:33:44.956805 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:33:45.698167 kubelet[3545]: E1029 23:33:45.698105 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:33:48.700875 kubelet[3545]: E1029 23:33:48.700792 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:33:52.698272 containerd[2015]: time="2025-10-29T23:33:52.697929732Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 23:33:52.945917 containerd[2015]: time="2025-10-29T23:33:52.945830713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:52.949523 containerd[2015]: time="2025-10-29T23:33:52.948763141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 23:33:52.949523 containerd[2015]: time="2025-10-29T23:33:52.948795781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 29 23:33:52.949772 kubelet[3545]: E1029 23:33:52.949093 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:52.949772 kubelet[3545]: E1029 23:33:52.949176 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 23:33:52.949772 kubelet[3545]: E1029 23:33:52.949373 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbnrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f68d4cfbc-mbbhz_calico-system(a76d1c5c-b32f-4f1f-b7bc-93c38286ef75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:52.951076 kubelet[3545]: E1029 23:33:52.950637 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:33:53.703753 containerd[2015]: time="2025-10-29T23:33:53.703377529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 23:33:53.982051 containerd[2015]: time="2025-10-29T23:33:53.981831171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:53.984024 containerd[2015]: time="2025-10-29T23:33:53.983952747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 23:33:53.984177 containerd[2015]: time="2025-10-29T23:33:53.984083199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:53.984576 kubelet[3545]: E1029 23:33:53.984441 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:53.984576 kubelet[3545]: E1029 23:33:53.984527 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 23:33:53.986760 kubelet[3545]: 
E1029 23:33:53.985379 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cfd4c4c89-58cj5_calico-apiserver(2788a03f-6870-4386-aa19-59a40e87a133): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:53.987209 kubelet[3545]: E1029 23:33:53.987134 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:33:55.699641 kubelet[3545]: E1029 23:33:55.699546 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:33:57.699678 containerd[2015]: time="2025-10-29T23:33:57.697998593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 23:33:57.960167 containerd[2015]: time="2025-10-29T23:33:57.959885730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:57.962359 containerd[2015]: time="2025-10-29T23:33:57.962032134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 23:33:57.962633 containerd[2015]: time="2025-10-29T23:33:57.962101578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 29 23:33:57.963446 kubelet[3545]: E1029 23:33:57.963303 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:33:57.965515 kubelet[3545]: E1029 23:33:57.964734 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 23:33:57.965515 kubelet[3545]: E1029 23:33:57.964964 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx6pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lq8fh_calico-system(0e56094f-29e3-42d4-b70d-e871179d5468): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:57.966937 kubelet[3545]: E1029 23:33:57.966844 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:33:59.711370 containerd[2015]: time="2025-10-29T23:33:59.711012763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 23:33:59.988479 containerd[2015]: time="2025-10-29T23:33:59.988202612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:33:59.990590 containerd[2015]: time="2025-10-29T23:33:59.990430544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 23:33:59.990590 containerd[2015]: time="2025-10-29T23:33:59.990457232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 29 23:33:59.991034 kubelet[3545]: E1029 23:33:59.990981 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:59.993204 kubelet[3545]: E1029 23:33:59.992750 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 23:33:59.993204 kubelet[3545]: E1029 23:33:59.993085 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 23:33:59.996818 containerd[2015]: time="2025-10-29T23:33:59.996547844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 23:34:00.275565 containerd[2015]: time="2025-10-29T23:34:00.274731102Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 29 23:34:00.277475 containerd[2015]: time="2025-10-29T23:34:00.277310070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 23:34:00.277475 containerd[2015]: time="2025-10-29T23:34:00.277386786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 29 23:34:00.279195 kubelet[3545]: E1029 23:34:00.278811 3545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:34:00.279195 kubelet[3545]: E1029 23:34:00.278873 3545 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 23:34:00.279195 kubelet[3545]: E1029 23:34:00.279046 3545 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xl4cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wm5cb_calico-system(a3348575-d754-476b-94b5-28b2df5efe85): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 23:34:00.280750 kubelet[3545]: E1029 23:34:00.280641 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:34:01.704566 kubelet[3545]: E1029 23:34:01.704424 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:34:04.697570 kubelet[3545]: E1029 23:34:04.697489 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:34:06.697356 kubelet[3545]: E1029 23:34:06.697276 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:34:07.698397 kubelet[3545]: E1029 23:34:07.698261 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:34:11.698001 kubelet[3545]: E1029 23:34:11.697905 3545 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lq8fh" podUID="0e56094f-29e3-42d4-b70d-e871179d5468" Oct 29 23:34:12.698666 kubelet[3545]: E1029 23:34:12.698557 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wm5cb" podUID="a3348575-d754-476b-94b5-28b2df5efe85" Oct 29 23:34:12.699446 kubelet[3545]: E1029 23:34:12.698783 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c7fdf84f6-glqpf" podUID="e0ab4b8c-c86a-446f-bf29-1179e47cdecc" Oct 29 23:34:12.787919 systemd[1]: cri-containerd-d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44.scope: Deactivated successfully. Oct 29 23:34:12.788825 systemd[1]: cri-containerd-d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44.scope: Consumed 38.793s CPU time, 107M memory peak. 
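The same "Back-off pulling image" messages recur above at widening intervals because the kubelet retries failed image pulls with exponential backoff. The sketch below shows that doubling pattern in isolation; the 10 second initial delay and 5 minute cap are assumed defaults, not values read from this log.

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a cap, the pattern behind the
// repeating "Back-off pulling image" entries above. The initial delay and cap
// passed in main are assumptions, not values taken from this log.
func nextBackoff(prev, initial, max time.Duration) time.Duration {
	if prev == 0 {
		return initial
	}
	next := prev * 2
	if next > max {
		return max
	}
	return next
}

func main() {
	var d time.Duration
	for i := 0; i < 8; i++ {
		d = nextBackoff(d, 10*time.Second, 5*time.Minute)
		fmt.Printf("attempt %d: retry after %s\n", i+1, d)
	}
}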
Oct 29 23:34:12.793391 containerd[2015]: time="2025-10-29T23:34:12.793328192Z" level=info msg="received exit event container_id:\"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\" id:\"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\" pid:3862 exit_status:1 exited_at:{seconds:1761780852 nanos:792529532}" Oct 29 23:34:12.794435 containerd[2015]: time="2025-10-29T23:34:12.793886228Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\" id:\"d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44\" pid:3862 exit_status:1 exited_at:{seconds:1761780852 nanos:792529532}" Oct 29 23:34:12.834814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44-rootfs.mount: Deactivated successfully. Oct 29 23:34:12.853883 kubelet[3545]: E1029 23:34:12.853827 3545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-28?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 29 23:34:13.072506 systemd[1]: cri-containerd-753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86.scope: Deactivated successfully. Oct 29 23:34:13.073344 systemd[1]: cri-containerd-753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86.scope: Consumed 5.647s CPU time, 58M memory peak. Oct 29 23:34:13.079733 containerd[2015]: time="2025-10-29T23:34:13.079585637Z" level=info msg="received exit event container_id:\"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\" id:\"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\" pid:3221 exit_status:1 exited_at:{seconds:1761780853 nanos:78934925}" Oct 29 23:34:13.080378 containerd[2015]: time="2025-10-29T23:34:13.079603001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\" id:\"753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86\" pid:3221 exit_status:1 exited_at:{seconds:1761780853 nanos:78934925}" Oct 29 23:34:13.136064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86-rootfs.mount: Deactivated successfully. 
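The controller.go entry above shows the kubelet timing out while PUTting its lease at /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-28, consistent with the control-plane containers being restarted at the same moment. A minimal client-go sketch of the same renewal, assuming the kubelet kubeconfig path, could look like this.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for node credentials.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same object the failing PUT above targets: namespace kube-node-lease,
	// lease named after the node.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Get(ctx, "ip-172-31-30-28", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := clientset.CoordinationV1().Leases("kube-node-lease").
		Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}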
Oct 29 23:34:13.258538 containerd[2015]: time="2025-10-29T23:34:13.258422550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"148d70afdcbb115987530b81460f3ecc65c2fca54f6ddc448811b44764115c0a\" id:\"e494f7b4d53b60680318eafb6cb1703ce103e26943d8acedccd714826acc09bf\" pid:5981 exit_status:1 exited_at:{seconds:1761780853 nanos:257609574}" Oct 29 23:34:13.610988 kubelet[3545]: I1029 23:34:13.610926 3545 scope.go:117] "RemoveContainer" containerID="753d6805008581aadc45fb305e906e778ccaf08af18e6401d01f852fbd9f9f86" Oct 29 23:34:13.615547 kubelet[3545]: I1029 23:34:13.615493 3545 scope.go:117] "RemoveContainer" containerID="d83f062a1058ef21c7d0b655ec18d4b78c816cc73f417ef0ff816d4fa2603e44" Oct 29 23:34:13.617348 containerd[2015]: time="2025-10-29T23:34:13.617280116Z" level=info msg="CreateContainer within sandbox \"7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 29 23:34:13.621308 containerd[2015]: time="2025-10-29T23:34:13.621036512Z" level=info msg="CreateContainer within sandbox \"7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 29 23:34:13.639753 containerd[2015]: time="2025-10-29T23:34:13.639553292Z" level=info msg="Container eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:34:13.654544 containerd[2015]: time="2025-10-29T23:34:13.654448460Z" level=info msg="Container 536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:34:13.668211 containerd[2015]: time="2025-10-29T23:34:13.668052968Z" level=info msg="CreateContainer within sandbox \"7fb325a92a9081acdc8162b67f1ae9391899759055a7291122f4e6ffacb79080\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe\"" Oct 29 23:34:13.669444 containerd[2015]: time="2025-10-29T23:34:13.669243188Z" level=info msg="StartContainer for \"eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe\"" Oct 29 23:34:13.673016 containerd[2015]: time="2025-10-29T23:34:13.672947120Z" level=info msg="connecting to shim eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe" address="unix:///run/containerd/s/7e866d3b76445c102bc7af9916b5fdc746c3555e3ff3a73f07828eecc34d8fd1" protocol=ttrpc version=3 Oct 29 23:34:13.675442 containerd[2015]: time="2025-10-29T23:34:13.675380324Z" level=info msg="CreateContainer within sandbox \"7430417936ea96d8fa598aa08917ab2fa63932f22ea5445f3dbb52dfaf73be4a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92\"" Oct 29 23:34:13.677911 containerd[2015]: time="2025-10-29T23:34:13.677854916Z" level=info msg="StartContainer for \"536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92\"" Oct 29 23:34:13.680080 containerd[2015]: time="2025-10-29T23:34:13.679979600Z" level=info msg="connecting to shim 536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92" address="unix:///run/containerd/s/2a6f479d788c9c0ff8dec5ae0f111ef51f9f12e6361804d3ad59442219c676db" protocol=ttrpc version=3 Oct 29 23:34:13.713073 systemd[1]: Started cri-containerd-eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe.scope - libcontainer container 
eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe. Oct 29 23:34:13.732954 systemd[1]: Started cri-containerd-536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92.scope - libcontainer container 536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92. Oct 29 23:34:13.845418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652298029.mount: Deactivated successfully. Oct 29 23:34:13.868909 containerd[2015]: time="2025-10-29T23:34:13.868039413Z" level=info msg="StartContainer for \"536b6a4da20617b718fe92b22b3f1ee33a7698038fff009635842b43cd0ceb92\" returns successfully" Oct 29 23:34:13.885417 containerd[2015]: time="2025-10-29T23:34:13.884922861Z" level=info msg="StartContainer for \"eafa263be5b9a9db8c2dd4a6225a1ce9c72b9608c13b7753828ed9c46d5bbabe\" returns successfully" Oct 29 23:34:17.179883 systemd[1]: cri-containerd-aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d.scope: Deactivated successfully. Oct 29 23:34:17.181902 systemd[1]: cri-containerd-aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d.scope: Consumed 4.732s CPU time, 23.3M memory peak. Oct 29 23:34:17.186727 containerd[2015]: time="2025-10-29T23:34:17.186587638Z" level=info msg="received exit event container_id:\"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\" id:\"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\" pid:3205 exit_status:1 exited_at:{seconds:1761780857 nanos:186089182}" Oct 29 23:34:17.188788 containerd[2015]: time="2025-10-29T23:34:17.188255806Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\" id:\"aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d\" pid:3205 exit_status:1 exited_at:{seconds:1761780857 nanos:186089182}" Oct 29 23:34:17.229381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d-rootfs.mount: Deactivated successfully. 
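The preceding entries show the recovery path for the crashed kube-controller-manager and tigera-operator containers: the kubelet marks the old container for removal, containerd creates a replacement inside the existing sandbox, connects to a shim over ttrpc, and starts it. The kubelet drives this through the CRI, but the equivalent create-and-start steps can be sketched with containerd's native 1.x Go client as below; the socket path, container ID, and pause image reference are assumptions for illustration only.

package main

import (
	"context"
	"fmt"
	"syscall"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Talk to the same containerd instance the kubelet uses (socket path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed resources live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Any image already present on the node works; the pause image is assumed here.
	image, err := client.GetImage(ctx, "registry.k8s.io/pause:3.10")
	if err != nil {
		panic(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		panic(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask connects to a shim and Start launches the process, the same
	// "connecting to shim ... StartContainer ... returns successfully"
	// sequence visible in the entries above.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		panic(err)
	}
	exitC, err := task.Wait(ctx)
	if err != nil {
		panic(err)
	}
	if err := task.Start(ctx); err != nil {
		panic(err)
	}
	fmt.Println("started task with pid", task.Pid())

	// Stop and clean up again so the sketch leaves nothing behind.
	if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
		panic(err)
	}
	<-exitC
	task.Delete(ctx)
}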
Oct 29 23:34:17.645320 kubelet[3545]: I1029 23:34:17.645165 3545 scope.go:117] "RemoveContainer" containerID="aa8ea618885f3305faa6f4678127414ff5974eabbcedc62f93fe0b57c8dc932d" Oct 29 23:34:17.649351 containerd[2015]: time="2025-10-29T23:34:17.649277880Z" level=info msg="CreateContainer within sandbox \"db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 29 23:34:17.669815 containerd[2015]: time="2025-10-29T23:34:17.668807232Z" level=info msg="Container 1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96: CDI devices from CRI Config.CDIDevices: []" Oct 29 23:34:17.689541 containerd[2015]: time="2025-10-29T23:34:17.689467440Z" level=info msg="CreateContainer within sandbox \"db855f637184d7c01eced28a41928d4dc5c72242337f35e7669297415d6e0add\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96\"" Oct 29 23:34:17.691292 containerd[2015]: time="2025-10-29T23:34:17.690707568Z" level=info msg="StartContainer for \"1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96\"" Oct 29 23:34:17.693300 containerd[2015]: time="2025-10-29T23:34:17.693252000Z" level=info msg="connecting to shim 1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96" address="unix:///run/containerd/s/3863b3d2526d9f775381d79a69f718fe61b50e7d4810e5d0b587843e65733490" protocol=ttrpc version=3 Oct 29 23:34:17.741976 systemd[1]: Started cri-containerd-1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96.scope - libcontainer container 1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96. Oct 29 23:34:17.827398 containerd[2015]: time="2025-10-29T23:34:17.827305993Z" level=info msg="StartContainer for \"1de137e507eb49cdd926923aa98d62aa4f112f82fa6517ca3b635a1f1618dc96\" returns successfully" Oct 29 23:34:18.698035 kubelet[3545]: E1029 23:34:18.697968 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f68d4cfbc-mbbhz" podUID="a76d1c5c-b32f-4f1f-b7bc-93c38286ef75" Oct 29 23:34:21.697476 kubelet[3545]: E1029 23:34:21.697269 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-58cj5" podUID="2788a03f-6870-4386-aa19-59a40e87a133" Oct 29 23:34:21.697476 kubelet[3545]: E1029 23:34:21.697397 3545 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cfd4c4c89-kqbtj" podUID="fa52b929-eb21-441a-b4e7-cea898f2ddc5" Oct 29 23:34:22.855807 kubelet[3545]: E1029 23:34:22.855729 3545 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-28?timeout=10s\": context deadline exceeded"
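Every pull failure in this section traces back to the registry answering 404 for the calico v3.30.4 tags. Independently of containerd, that can be checked with a stdlib-only request against the manifest endpoint, assuming ghcr.io follows the standard registry v2 anonymous token flow; the /token endpoint and scope format below are assumptions, not taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// Checks whether ghcr.io/flatcar/calico/whisker:v3.30.4 resolves, mirroring the
// containerd "fetch failed after status: 404 Not Found" entries above.
func main() {
	repo := "flatcar/calico/whisker"
	tag := "v3.30.4"

	// Step 1: anonymous pull token (assumed standard token endpoint).
	tokenURL := "https://ghcr.io/token?service=ghcr.io&scope=" +
		url.QueryEscape("repository:"+repo+":pull")
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Step 2: ask for the manifest; a 404 here matches the NotFound errors above.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	manifestResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer manifestResp.Body.Close()
	fmt.Printf("%s:%s -> HTTP %d\n", repo, tag, manifestResp.StatusCode)
}

A 404 from the manifest request would match the containerd errors throughout this section; a 200 would instead point at a node-local resolution or configuration problem.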