Nov 23 22:58:20.112589 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 23 22:58:20.112634 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025
Nov 23 22:58:20.112657 kernel: KASLR disabled due to lack of seed
Nov 23 22:58:20.112673 kernel: efi: EFI v2.7 by EDK II
Nov 23 22:58:20.112688 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Nov 23 22:58:20.112703 kernel: secureboot: Secure boot disabled
Nov 23 22:58:20.112720 kernel: ACPI: Early table checksum verification disabled
Nov 23 22:58:20.112771 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 23 22:58:20.112790 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 23 22:58:20.112805 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 23 22:58:20.112820 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 23 22:58:20.112841 kernel: ACPI: FACS 0x0000000078630000 000040
Nov 23 22:58:20.112856 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 23 22:58:20.112874 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 23 22:58:20.112891 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 23 22:58:20.112907 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 23 22:58:20.112927 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 23 22:58:20.112942 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 23 22:58:20.112958 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 23 22:58:20.112974 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 23 22:58:20.112989 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 23 22:58:20.113005 kernel: printk: legacy bootconsole [uart0] enabled
Nov 23 22:58:20.113020 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 22:58:20.113036 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 23 22:58:20.113052 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Nov 23 22:58:20.113068 kernel: Zone ranges:
Nov 23 22:58:20.113083 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 23 22:58:20.113103 kernel: DMA32 empty
Nov 23 22:58:20.113118 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 23 22:58:20.113133 kernel: Device empty
Nov 23 22:58:20.113148 kernel: Movable zone start for each node
Nov 23 22:58:20.113163 kernel: Early memory node ranges
Nov 23 22:58:20.113179 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 23 22:58:20.113194 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 23 22:58:20.113209 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Nov 23 22:58:20.113224 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 23 22:58:20.113240 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 23 22:58:20.113256 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 23 22:58:20.113271 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 23 22:58:20.113290 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 23 22:58:20.113312 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 23 22:58:20.113329 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 23 22:58:20.113345 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Nov 23 22:58:20.113361 kernel: psci: probing for conduit method from ACPI.
Nov 23 22:58:20.113381 kernel: psci: PSCIv1.0 detected in firmware.
Nov 23 22:58:20.113397 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 22:58:20.113413 kernel: psci: Trusted OS migration not required
Nov 23 22:58:20.113429 kernel: psci: SMC Calling Convention v1.1
Nov 23 22:58:20.113446 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Nov 23 22:58:20.113462 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 22:58:20.113479 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 22:58:20.113496 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 23 22:58:20.113512 kernel: Detected PIPT I-cache on CPU0
Nov 23 22:58:20.113529 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 22:58:20.113545 kernel: CPU features: detected: Spectre-v2
Nov 23 22:58:20.113565 kernel: CPU features: detected: Spectre-v3a
Nov 23 22:58:20.113581 kernel: CPU features: detected: Spectre-BHB
Nov 23 22:58:20.113597 kernel: CPU features: detected: ARM erratum 1742098
Nov 23 22:58:20.113614 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 23 22:58:20.113629 kernel: alternatives: applying boot alternatives
Nov 23 22:58:20.113648 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34
Nov 23 22:58:20.113665 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 22:58:20.113681 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 22:58:20.113698 kernel: Fallback order for Node 0: 0
Nov 23 22:58:20.113714 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Nov 23 22:58:20.115779 kernel: Policy zone: Normal
Nov 23 22:58:20.115811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 22:58:20.115828 kernel: software IO TLB: area num 2.
Nov 23 22:58:20.115845 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB)
Nov 23 22:58:20.115883 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 23 22:58:20.115901 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 22:58:20.115919 kernel: rcu: RCU event tracing is enabled.
Nov 23 22:58:20.115936 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 23 22:58:20.115952 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 22:58:20.115969 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 22:58:20.115985 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 22:58:20.116001 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 23 22:58:20.116024 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 22:58:20.116041 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 22:58:20.116057 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 22:58:20.116074 kernel: GICv3: 96 SPIs implemented
Nov 23 22:58:20.116118 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 22:58:20.116155 kernel: Root IRQ handler: gic_handle_irq
Nov 23 22:58:20.116176 kernel: GICv3: GICv3 features: 16 PPIs
Nov 23 22:58:20.116193 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 23 22:58:20.116210 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 23 22:58:20.116226 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 23 22:58:20.116242 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Nov 23 22:58:20.116260 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Nov 23 22:58:20.116282 kernel: GICv3: using LPI property table @0x0000000400110000
Nov 23 22:58:20.116298 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 23 22:58:20.116314 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Nov 23 22:58:20.116331 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 22:58:20.116347 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 23 22:58:20.116364 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 23 22:58:20.116381 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 23 22:58:20.116397 kernel: Console: colour dummy device 80x25
Nov 23 22:58:20.116414 kernel: printk: legacy console [tty1] enabled
Nov 23 22:58:20.116431 kernel: ACPI: Core revision 20240827
Nov 23 22:58:20.116448 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 23 22:58:20.116469 kernel: pid_max: default: 32768 minimum: 301
Nov 23 22:58:20.116486 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 22:58:20.116502 kernel: landlock: Up and running.
Nov 23 22:58:20.116519 kernel: SELinux: Initializing.
Nov 23 22:58:20.116535 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 22:58:20.116552 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 22:58:20.116569 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 22:58:20.116585 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 22:58:20.116602 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 22:58:20.116622 kernel: Remapping and enabling EFI services.
Nov 23 22:58:20.116639 kernel: smp: Bringing up secondary CPUs ...
Nov 23 22:58:20.116655 kernel: Detected PIPT I-cache on CPU1
Nov 23 22:58:20.116672 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 23 22:58:20.116688 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Nov 23 22:58:20.116705 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 23 22:58:20.116721 kernel: smp: Brought up 1 node, 2 CPUs
Nov 23 22:58:20.116765 kernel: SMP: Total of 2 processors activated.
Nov 23 22:58:20.116785 kernel: CPU: All CPU(s) started at EL1
Nov 23 22:58:20.116817 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 22:58:20.116835 kernel: CPU features: detected: 32-bit EL1 Support
Nov 23 22:58:20.116856 kernel: CPU features: detected: CRC32 instructions
Nov 23 22:58:20.116873 kernel: alternatives: applying system-wide alternatives
Nov 23 22:58:20.116892 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Nov 23 22:58:20.116910 kernel: devtmpfs: initialized
Nov 23 22:58:20.116928 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 22:58:20.116949 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 23 22:58:20.116967 kernel: 16880 pages in range for non-PLT usage
Nov 23 22:58:20.116984 kernel: 508400 pages in range for PLT usage
Nov 23 22:58:20.117001 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 22:58:20.117018 kernel: SMBIOS 3.0.0 present.
Nov 23 22:58:20.117036 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 23 22:58:20.117053 kernel: DMI: Memory slots populated: 0/0
Nov 23 22:58:20.117070 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 22:58:20.117088 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 22:58:20.117110 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 22:58:20.117127 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 22:58:20.117145 kernel: audit: initializing netlink subsys (disabled)
Nov 23 22:58:20.117162 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Nov 23 22:58:20.117179 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 22:58:20.117196 kernel: cpuidle: using governor menu
Nov 23 22:58:20.117213 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 22:58:20.117230 kernel: ASID allocator initialised with 65536 entries
Nov 23 22:58:20.117248 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 22:58:20.117269 kernel: Serial: AMBA PL011 UART driver
Nov 23 22:58:20.117286 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 22:58:20.117303 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 22:58:20.117320 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 22:58:20.117337 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 22:58:20.117355 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 22:58:20.117372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 22:58:20.117389 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 22:58:20.117406 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 22:58:20.117427 kernel: ACPI: Added _OSI(Module Device)
Nov 23 22:58:20.117444 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 22:58:20.117461 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 22:58:20.117478 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 22:58:20.117495 kernel: ACPI: Interpreter enabled
Nov 23 22:58:20.117513 kernel: ACPI: Using GIC for interrupt routing
Nov 23 22:58:20.117530 kernel: ACPI: MCFG table detected, 1 entries
Nov 23 22:58:20.117547 kernel: ACPI: CPU0 has been hot-added
Nov 23 22:58:20.117564 kernel: ACPI: CPU1 has been hot-added
Nov 23 22:58:20.117585 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Nov 23 22:58:20.118685 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 23 22:58:20.118980 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 23 22:58:20.119168 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 23 22:58:20.119355 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Nov 23 22:58:20.119539 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Nov 23 22:58:20.119564 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 23 22:58:20.119590 kernel: acpiphp: Slot [1] registered
Nov 23 22:58:20.119609 kernel: acpiphp: Slot [2] registered
Nov 23 22:58:20.119626 kernel: acpiphp: Slot [3] registered
Nov 23 22:58:20.119644 kernel: acpiphp: Slot [4] registered
Nov 23 22:58:20.119661 kernel: acpiphp: Slot [5] registered
Nov 23 22:58:20.119678 kernel: acpiphp: Slot [6] registered
Nov 23 22:58:20.119696 kernel: acpiphp: Slot [7] registered
Nov 23 22:58:20.119713 kernel: acpiphp: Slot [8] registered
Nov 23 22:58:20.119759 kernel: acpiphp: Slot [9] registered
Nov 23 22:58:20.119779 kernel: acpiphp: Slot [10] registered
Nov 23 22:58:20.119804 kernel: acpiphp: Slot [11] registered
Nov 23 22:58:20.119821 kernel: acpiphp: Slot [12] registered
Nov 23 22:58:20.119839 kernel: acpiphp: Slot [13] registered
Nov 23 22:58:20.119877 kernel: acpiphp: Slot [14] registered
Nov 23 22:58:20.119897 kernel: acpiphp: Slot [15] registered
Nov 23 22:58:20.119915 kernel: acpiphp: Slot [16] registered
Nov 23 22:58:20.119933 kernel: acpiphp: Slot [17] registered
Nov 23 22:58:20.119950 kernel: acpiphp: Slot [18] registered
Nov 23 22:58:20.119968 kernel: acpiphp: Slot [19] registered
Nov 23 22:58:20.119992 kernel: acpiphp: Slot [20] registered
Nov 23 22:58:20.120010 kernel: acpiphp: Slot [21] registered
Nov 23 22:58:20.120028 kernel: acpiphp: Slot [22] registered
Nov 23 22:58:20.120045 kernel: acpiphp: Slot [23] registered
Nov 23 22:58:20.120062 kernel: acpiphp: Slot [24] registered
Nov 23 22:58:20.120080 kernel: acpiphp: Slot [25] registered
Nov 23 22:58:20.120097 kernel: acpiphp: Slot [26] registered
Nov 23 22:58:20.120114 kernel: acpiphp: Slot [27] registered
Nov 23 22:58:20.120131 kernel: acpiphp: Slot [28] registered
Nov 23 22:58:20.120148 kernel: acpiphp: Slot [29] registered
Nov 23 22:58:20.120170 kernel: acpiphp: Slot [30] registered
Nov 23 22:58:20.120188 kernel: acpiphp: Slot [31] registered
Nov 23 22:58:20.120206 kernel: PCI host bridge to bus 0000:00
Nov 23 22:58:20.120435 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 23 22:58:20.120606 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 23 22:58:20.121200 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 23 22:58:20.121380 kernel: pci_bus 0000:00: root bus resource [bus 00]
Nov 23 22:58:20.121615 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Nov 23 22:58:20.121892 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Nov 23 22:58:20.122089 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Nov 23 22:58:20.122289 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Nov 23 22:58:20.122477 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Nov 23 22:58:20.122663 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 23 22:58:20.122904 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Nov 23 22:58:20.123092 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Nov 23 22:58:20.123278 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Nov 23 22:58:20.123460 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Nov 23 22:58:20.123642 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 23 22:58:20.123887 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 23 22:58:20.124061 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 23 22:58:20.124237 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 23 22:58:20.124263 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 23 22:58:20.124281 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 23 22:58:20.124300 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 23 22:58:20.124317 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 23 22:58:20.125003 kernel: iommu: Default domain type: Translated
Nov 23 22:58:20.125031 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 22:58:20.125049 kernel: efivars: Registered efivars operations
Nov 23 22:58:20.125067 kernel: vgaarb: loaded
Nov 23 22:58:20.125093 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 22:58:20.125110 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 22:58:20.125128 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 22:58:20.125146 kernel: pnp: PnP ACPI init
Nov 23 22:58:20.125397 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 23 22:58:20.125425 kernel: pnp: PnP ACPI: found 1 devices
Nov 23 22:58:20.125443 kernel: NET: Registered PF_INET protocol family
Nov 23 22:58:20.125460 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 22:58:20.125484 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 22:58:20.125502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 22:58:20.125519 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 22:58:20.125537 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 22:58:20.125555 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 22:58:20.125572 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 22:58:20.125590 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 22:58:20.125607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 22:58:20.125625 kernel: PCI: CLS 0 bytes, default 64
Nov 23 22:58:20.125646 kernel: kvm [1]: HYP mode not available
Nov 23 22:58:20.125663 kernel: Initialise system trusted keyrings
Nov 23 22:58:20.125681 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 22:58:20.125698 kernel: Key type asymmetric registered
Nov 23 22:58:20.125715 kernel: Asymmetric key parser 'x509' registered
Nov 23 22:58:20.125769 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 22:58:20.125789 kernel: io scheduler mq-deadline registered
Nov 23 22:58:20.125807 kernel: io scheduler kyber registered
Nov 23 22:58:20.125824 kernel: io scheduler bfq registered
Nov 23 22:58:20.126041 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 23 22:58:20.126067 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 23 22:58:20.126085 kernel: ACPI: button: Power Button [PWRB]
Nov 23 22:58:20.126103 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 23 22:58:20.126120 kernel: ACPI: button: Sleep Button [SLPB]
Nov 23 22:58:20.126138 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 22:58:20.126156 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 23 22:58:20.126350 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 23 22:58:20.126379 kernel: printk: legacy console [ttyS0] disabled
Nov 23 22:58:20.126397 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 23 22:58:20.126414 kernel: printk: legacy console [ttyS0] enabled
Nov 23 22:58:20.126432 kernel: printk: legacy bootconsole [uart0] disabled
Nov 23 22:58:20.126449 kernel: thunder_xcv, ver 1.0
Nov 23 22:58:20.126466 kernel: thunder_bgx, ver 1.0
Nov 23 22:58:20.126483 kernel: nicpf, ver 1.0
Nov 23 22:58:20.126500 kernel: nicvf, ver 1.0
Nov 23 22:58:20.126704 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 22:58:20.128801 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T22:58:19 UTC (1763938699)
Nov 23 22:58:20.128837 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 22:58:20.128856 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Nov 23 22:58:20.128874 kernel: NET: Registered PF_INET6 protocol family
Nov 23 22:58:20.128892 kernel: watchdog: NMI not fully supported
Nov 23 22:58:20.128910 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 22:58:20.128928 kernel: Segment Routing with IPv6
Nov 23 22:58:20.128945 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 22:58:20.128963 kernel: NET: Registered PF_PACKET protocol family
Nov 23 22:58:20.128989 kernel: Key type dns_resolver registered
Nov 23 22:58:20.129007 kernel: registered taskstats version 1
Nov 23 22:58:20.129025 kernel: Loading compiled-in X.509 certificates
Nov 23 22:58:20.129043 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1'
Nov 23 22:58:20.129065 kernel: Demotion targets for Node 0: null
Nov 23 22:58:20.129082 kernel: Key type .fscrypt registered
Nov 23 22:58:20.129099 kernel: Key type fscrypt-provisioning registered
Nov 23 22:58:20.129116 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 22:58:20.129134 kernel: ima: Allocated hash algorithm: sha1
Nov 23 22:58:20.129156 kernel: ima: No architecture policies found
Nov 23 22:58:20.129174 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 22:58:20.129191 kernel: clk: Disabling unused clocks
Nov 23 22:58:20.129209 kernel: PM: genpd: Disabling unused power domains
Nov 23 22:58:20.129226 kernel: Warning: unable to open an initial console.
Nov 23 22:58:20.129244 kernel: Freeing unused kernel memory: 39552K
Nov 23 22:58:20.129261 kernel: Run /init as init process
Nov 23 22:58:20.129278 kernel: with arguments:
Nov 23 22:58:20.129296 kernel: /init
Nov 23 22:58:20.129316 kernel: with environment:
Nov 23 22:58:20.129333 kernel: HOME=/
Nov 23 22:58:20.129351 kernel: TERM=linux
Nov 23 22:58:20.129370 systemd[1]: Successfully made /usr/ read-only.
Nov 23 22:58:20.129393 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 22:58:20.129413 systemd[1]: Detected virtualization amazon.
Nov 23 22:58:20.129432 systemd[1]: Detected architecture arm64.
Nov 23 22:58:20.129454 systemd[1]: Running in initrd.
Nov 23 22:58:20.129473 systemd[1]: No hostname configured, using default hostname.
Nov 23 22:58:20.129492 systemd[1]: Hostname set to .
Nov 23 22:58:20.129511 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 22:58:20.129529 systemd[1]: Queued start job for default target initrd.target.
Nov 23 22:58:20.129548 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 22:58:20.129567 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 22:58:20.129587 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 22:58:20.129610 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 22:58:20.129630 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 22:58:20.129651 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 22:58:20.129672 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 22:58:20.129691 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 22:58:20.129710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 22:58:20.129774 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 22:58:20.129803 systemd[1]: Reached target paths.target - Path Units.
Nov 23 22:58:20.129823 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 22:58:20.129842 systemd[1]: Reached target swap.target - Swaps.
Nov 23 22:58:20.129861 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 22:58:20.129879 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 22:58:20.129899 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 22:58:20.129918 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 22:58:20.129937 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 22:58:20.129955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 22:58:20.129979 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 22:58:20.129998 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 22:58:20.130016 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 22:58:20.130035 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 22:58:20.130054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 22:58:20.130073 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 22:58:20.130092 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 22:58:20.130111 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 22:58:20.130134 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 22:58:20.130154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 22:58:20.130172 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 22:58:20.130192 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 22:58:20.130212 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 22:58:20.130235 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 22:58:20.130254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 22:58:20.130314 systemd-journald[258]: Collecting audit messages is disabled.
Nov 23 22:58:20.130356 systemd-journald[258]: Journal started
Nov 23 22:58:20.130396 systemd-journald[258]: Runtime Journal (/run/log/journal/ec23404a6f2856cfbfc7bfecb7f7e58c) is 8M, max 75.3M, 67.3M free.
Nov 23 22:58:20.112836 systemd-modules-load[260]: Inserted module 'overlay'
Nov 23 22:58:20.138109 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 22:58:20.140373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 22:58:20.155671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 22:58:20.159386 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 22:58:20.164063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 22:58:20.170991 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 22:58:20.179872 systemd-modules-load[260]: Inserted module 'br_netfilter'
Nov 23 22:58:20.182435 kernel: Bridge firewalling registered
Nov 23 22:58:20.184225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 22:58:20.196960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 22:58:20.207775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 22:58:20.230214 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 22:58:20.233930 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 22:58:20.240132 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 22:58:20.254289 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 22:58:20.266514 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 22:58:20.279668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 22:58:20.290956 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 22:58:20.301223 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34
Nov 23 22:58:20.390596 systemd-resolved[307]: Positive Trust Anchors:
Nov 23 22:58:20.390633 systemd-resolved[307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 22:58:20.390694 systemd-resolved[307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 22:58:20.473763 kernel: SCSI subsystem initialized
Nov 23 22:58:20.481760 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 22:58:20.493763 kernel: iscsi: registered transport (tcp)
Nov 23 22:58:20.516054 kernel: iscsi: registered transport (qla4xxx)
Nov 23 22:58:20.516134 kernel: QLogic iSCSI HBA Driver
Nov 23 22:58:20.550903 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 22:58:20.595369 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 22:58:20.606920 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 22:58:20.659830 kernel: random: crng init done
Nov 23 22:58:20.660066 systemd-resolved[307]: Defaulting to hostname 'linux'.
Nov 23 22:58:20.663959 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 22:58:20.668878 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 22:58:20.698980 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 22:58:20.705050 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 22:58:20.789790 kernel: raid6: neonx8 gen() 6493 MB/s
Nov 23 22:58:20.806763 kernel: raid6: neonx4 gen() 6442 MB/s
Nov 23 22:58:20.823760 kernel: raid6: neonx2 gen() 5352 MB/s
Nov 23 22:58:20.840764 kernel: raid6: neonx1 gen() 3922 MB/s
Nov 23 22:58:20.858761 kernel: raid6: int64x8 gen() 3635 MB/s
Nov 23 22:58:20.875765 kernel: raid6: int64x4 gen() 3678 MB/s
Nov 23 22:58:20.892760 kernel: raid6: int64x2 gen() 3565 MB/s
Nov 23 22:58:20.910787 kernel: raid6: int64x1 gen() 2734 MB/s
Nov 23 22:58:20.910827 kernel: raid6: using algorithm neonx8 gen() 6493 MB/s
Nov 23 22:58:20.929893 kernel: raid6: .... xor() 4749 MB/s, rmw enabled
Nov 23 22:58:20.929944 kernel: raid6: using neon recovery algorithm
Nov 23 22:58:20.938623 kernel: xor: measuring software checksum speed
Nov 23 22:58:20.938677 kernel: 8regs : 12925 MB/sec
Nov 23 22:58:20.941143 kernel: 32regs : 12090 MB/sec
Nov 23 22:58:20.941174 kernel: arm64_neon : 8876 MB/sec
Nov 23 22:58:20.941198 kernel: xor: using function: 8regs (12925 MB/sec)
Nov 23 22:58:21.032775 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 23 22:58:21.044284 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 22:58:21.050955 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 22:58:21.103690 systemd-udevd[508]: Using default interface naming scheme 'v255'.
Nov 23 22:58:21.115957 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 22:58:21.122126 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 23 22:58:21.159757 dracut-pre-trigger[511]: rd.md=0: removing MD RAID activation
Nov 23 22:58:21.204614 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 22:58:21.211384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 22:58:21.342637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 22:58:21.352656 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 23 22:58:21.509201 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 23 22:58:21.509270 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 23 22:58:21.519784 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 23 22:58:21.522773 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 23 22:58:21.527392 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 23 22:58:21.527751 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 23 22:58:21.535618 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 22:58:21.542822 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 23 22:58:21.543161 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:ad:fc:7f:02:a9
Nov 23 22:58:21.536709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 22:58:21.551666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 23 22:58:21.551704 kernel: GPT:9289727 != 33554431
Nov 23 22:58:21.551744 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 23 22:58:21.543398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 22:58:21.556229 kernel: GPT:9289727 != 33554431
Nov 23 22:58:21.558487 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 23 22:58:21.555116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 22:58:21.562323 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 23 22:58:21.559034 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 22:58:21.572135 (udev-worker)[565]: Network interface NamePolicy= disabled on kernel command line.
Nov 23 22:58:21.604761 kernel: nvme nvme0: using unchecked data buffer
Nov 23 22:58:21.622014 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 22:58:21.768093 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 23 22:58:21.775415 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 23 22:58:21.801683 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 23 22:58:21.827923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 23 22:58:21.866242 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 23 22:58:21.869318 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 23 22:58:21.878901 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 22:58:21.882254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 22:58:21.889838 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 22:58:21.896171 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 23 22:58:21.901797 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 23 22:58:21.925503 disk-uuid[687]: Primary Header is updated.
Nov 23 22:58:21.925503 disk-uuid[687]: Secondary Entries is updated.
Nov 23 22:58:21.925503 disk-uuid[687]: Secondary Header is updated.
Nov 23 22:58:21.936791 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 23 22:58:21.946383 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 22:58:22.965308 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 23 22:58:22.967285 disk-uuid[689]: The operation has completed successfully.
Nov 23 22:58:23.149817 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 23 22:58:23.150017 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 23 22:58:23.678898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 23 22:58:23.699065 sh[955]: Success
Nov 23 22:58:23.728904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 22:58:23.728981 kernel: device-mapper: uevent: version 1.0.3
Nov 23 22:58:23.731016 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 23 22:58:23.745765 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 23 22:58:23.850406 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 23 22:58:23.856382 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 23 22:58:23.872534 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 23 22:58:23.892797 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (978)
Nov 23 22:58:23.896536 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9
Nov 23 22:58:23.896578 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 23 22:58:23.935035 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 23 22:58:23.935096 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 23 22:58:23.936375 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 23 22:58:23.952268 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 23 22:58:23.953922 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 22:58:23.954396 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 23 22:58:23.955628 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 23 22:58:23.970938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 23 22:58:24.024770 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012)
Nov 23 22:58:24.029692 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 22:58:24.029786 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 22:58:24.038606 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 23 22:58:24.038686 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 23 22:58:24.047950 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 22:58:24.050831 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 23 22:58:24.060844 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 23 22:58:24.153710 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 22:58:24.163119 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 22:58:24.233196 systemd-networkd[1147]: lo: Link UP
Nov 23 22:58:24.233625 systemd-networkd[1147]: lo: Gained carrier
Nov 23 22:58:24.237272 systemd-networkd[1147]: Enumeration completed
Nov 23 22:58:24.238213 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 22:58:24.239498 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 22:58:24.239506 systemd-networkd[1147]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 22:58:24.243419 systemd[1]: Reached target network.target - Network.
Nov 23 22:58:24.250511 systemd-networkd[1147]: eth0: Link UP
Nov 23 22:58:24.250519 systemd-networkd[1147]: eth0: Gained carrier
Nov 23 22:58:24.250540 systemd-networkd[1147]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 22:58:24.292845 systemd-networkd[1147]: eth0: DHCPv4 address 172.31.24.27/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 23 22:58:24.480135 ignition[1069]: Ignition 2.22.0
Nov 23 22:58:24.480165 ignition[1069]: Stage: fetch-offline
Nov 23 22:58:24.483809 ignition[1069]: no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:24.483862 ignition[1069]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:24.486136 ignition[1069]: Ignition finished successfully
Nov 23 22:58:24.490216 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 22:58:24.499998 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 23 22:58:24.562877 ignition[1160]: Ignition 2.22.0
Nov 23 22:58:24.562908 ignition[1160]: Stage: fetch
Nov 23 22:58:24.564222 ignition[1160]: no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:24.564453 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:24.564772 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:24.594928 ignition[1160]: PUT result: OK
Nov 23 22:58:24.599620 ignition[1160]: parsed url from cmdline: ""
Nov 23 22:58:24.599638 ignition[1160]: no config URL provided
Nov 23 22:58:24.599655 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 22:58:24.599680 ignition[1160]: no config at "/usr/lib/ignition/user.ign"
Nov 23 22:58:24.599742 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:24.608539 ignition[1160]: PUT result: OK
Nov 23 22:58:24.609186 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 23 22:58:24.613031 ignition[1160]: GET result: OK
Nov 23 22:58:24.613200 ignition[1160]: parsing config with SHA512: 8a28a7df1ffffa6cf1a09b4acd9d9183f5e24cbf6d6b5f6d2430c60c445b9f1e4bab6d2d128058e5399b18270ae5583e3988e8cddbb516213e98a555c256e3a9
Nov 23 22:58:24.623653 unknown[1160]: fetched base config from "system"
Nov 23 22:58:24.623696 unknown[1160]: fetched base config from "system"
Nov 23 22:58:24.623712 unknown[1160]: fetched user config from "aws"
Nov 23 22:58:24.628597 ignition[1160]: fetch: fetch complete
Nov 23 22:58:24.628609 ignition[1160]: fetch: fetch passed
Nov 23 22:58:24.628708 ignition[1160]: Ignition finished successfully
Nov 23 22:58:24.638668 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 23 22:58:24.643014 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 23 22:58:24.707292 ignition[1166]: Ignition 2.22.0
Nov 23 22:58:24.707321 ignition[1166]: Stage: kargs
Nov 23 22:58:24.709186 ignition[1166]: no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:24.709210 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:24.710463 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:24.717595 ignition[1166]: PUT result: OK
Nov 23 22:58:24.723317 ignition[1166]: kargs: kargs passed
Nov 23 22:58:24.723513 ignition[1166]: Ignition finished successfully
Nov 23 22:58:24.734788 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 23 22:58:24.738905 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 23 22:58:24.792963 ignition[1172]: Ignition 2.22.0
Nov 23 22:58:24.793474 ignition[1172]: Stage: disks
Nov 23 22:58:24.794062 ignition[1172]: no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:24.794085 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:24.794210 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:24.798517 ignition[1172]: PUT result: OK
Nov 23 22:58:24.811783 ignition[1172]: disks: disks passed
Nov 23 22:58:24.812073 ignition[1172]: Ignition finished successfully
Nov 23 22:58:24.817773 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 23 22:58:24.822641 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 23 22:58:24.826474 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 23 22:58:24.834085 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 22:58:24.838343 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 22:58:24.840649 systemd[1]: Reached target basic.target - Basic System.
Nov 23 22:58:24.847049 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 23 22:58:24.901665 systemd-fsck[1180]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 23 22:58:24.908320 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 23 22:58:24.916537 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 23 22:58:25.052769 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none.
Nov 23 22:58:25.053217 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 23 22:58:25.057522 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 23 22:58:25.064931 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 22:58:25.073314 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 23 22:58:25.077415 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 23 22:58:25.077498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 23 22:58:25.077547 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 22:58:25.109116 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 23 22:58:25.115323 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 23 22:58:25.128830 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200)
Nov 23 22:58:25.133818 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 22:58:25.133888 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 22:58:25.140993 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 23 22:58:25.141043 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 23 22:58:25.144071 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 22:58:25.281632 initrd-setup-root[1225]: cut: /sysroot/etc/passwd: No such file or directory
Nov 23 22:58:25.292353 initrd-setup-root[1232]: cut: /sysroot/etc/group: No such file or directory
Nov 23 22:58:25.301822 initrd-setup-root[1239]: cut: /sysroot/etc/shadow: No such file or directory
Nov 23 22:58:25.309805 initrd-setup-root[1246]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 23 22:58:25.450824 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 23 22:58:25.453119 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 23 22:58:25.465540 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 23 22:58:25.486287 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 23 22:58:25.489975 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 22:58:25.531006 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 23 22:58:25.547409 ignition[1314]: INFO : Ignition 2.22.0
Nov 23 22:58:25.547409 ignition[1314]: INFO : Stage: mount
Nov 23 22:58:25.551291 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:25.551291 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:25.551291 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:25.559928 ignition[1314]: INFO : PUT result: OK
Nov 23 22:58:25.567501 ignition[1314]: INFO : mount: mount passed
Nov 23 22:58:25.567501 ignition[1314]: INFO : Ignition finished successfully
Nov 23 22:58:25.572310 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 23 22:58:25.581170 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 23 22:58:26.055896 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 22:58:26.104889 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326)
Nov 23 22:58:26.108958 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7
Nov 23 22:58:26.109010 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 22:58:26.115847 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 23 22:58:26.115924 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 23 22:58:26.119352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 22:58:26.175772 ignition[1343]: INFO : Ignition 2.22.0
Nov 23 22:58:26.175772 ignition[1343]: INFO : Stage: files
Nov 23 22:58:26.175772 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:26.175772 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:26.184407 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:26.187121 ignition[1343]: INFO : PUT result: OK
Nov 23 22:58:26.192063 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping
Nov 23 22:58:26.195076 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 23 22:58:26.195076 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 23 22:58:26.205290 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 23 22:58:26.212143 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 23 22:58:26.215481 unknown[1343]: wrote ssh authorized keys file for user: core
Nov 23 22:58:26.218029 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 23 22:58:26.222758 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 23 22:58:26.222758 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 23 22:58:26.256899 systemd-networkd[1147]: eth0: Gained IPv6LL
Nov 23 22:58:26.307863 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 23 22:58:26.422526 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 22:58:26.426841 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 22:58:26.458294 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 22:58:26.458294 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 22:58:26.458294 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 22:58:26.471814 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 22:58:26.471814 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 22:58:26.471814 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 23 22:58:26.909026 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 23 22:58:27.308557 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 22:58:27.313621 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 23 22:58:27.313621 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 22:58:27.325421 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 22:58:27.325421 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 23 22:58:27.332708 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 23 22:58:27.332708 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 23 22:58:27.332708 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 22:58:27.332708 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 22:58:27.332708 ignition[1343]: INFO : files: files passed
Nov 23 22:58:27.332708 ignition[1343]: INFO : Ignition finished successfully
Nov 23 22:58:27.344560 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 23 22:58:27.353387 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 23 22:58:27.367977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 23 22:58:27.380111 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 23 22:58:27.382759 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 23 22:58:27.404116 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 22:58:27.404116 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 22:58:27.411493 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 22:58:27.410660 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 22:58:27.420951 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 23 22:58:27.424993 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 23 22:58:27.504841 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 22:58:27.505877 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 23 22:58:27.512862 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 23 22:58:27.516884 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 23 22:58:27.519535 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 23 22:58:27.527502 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 23 22:58:27.567851 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 22:58:27.575515 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 23 22:58:27.615136 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 23 22:58:27.620721 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 22:58:27.626317 systemd[1]: Stopped target timers.target - Timer Units.
Nov 23 22:58:27.628651 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 22:58:27.628989 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 22:58:27.636161 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 23 22:58:27.640880 systemd[1]: Stopped target basic.target - Basic System.
Nov 23 22:58:27.643384 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 23 22:58:27.650013 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 22:58:27.657653 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 23 22:58:27.660479 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 22:58:27.667937 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 23 22:58:27.670893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 22:58:27.678751 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 23 22:58:27.683528 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 23 22:58:27.686243 systemd[1]: Stopped target swap.target - Swaps.
Nov 23 22:58:27.692200 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 22:58:27.692439 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 22:58:27.700015 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 23 22:58:27.705185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 22:58:27.708099 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 23 22:58:27.712973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 22:58:27.716251 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 22:58:27.716472 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 23 22:58:27.725765 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 23 22:58:27.726123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 22:58:27.728658 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 23 22:58:27.728920 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 23 22:58:27.738886 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 23 22:58:27.742117 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 22:58:27.742408 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 22:58:27.763206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 23 22:58:27.766117 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 22:58:27.770340 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 22:58:27.773924 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 22:58:27.774234 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 22:58:27.797244 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 22:58:27.798685 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 23 22:58:27.818341 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 23 22:58:27.829688 ignition[1397]: INFO : Ignition 2.22.0
Nov 23 22:58:27.829688 ignition[1397]: INFO : Stage: umount
Nov 23 22:58:27.834115 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 22:58:27.834115 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 23 22:58:27.834115 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 23 22:58:27.842886 ignition[1397]: INFO : PUT result: OK
Nov 23 22:58:27.846671 ignition[1397]: INFO : umount: umount passed
Nov 23 22:58:27.848745 ignition[1397]: INFO : Ignition finished successfully
Nov 23 22:58:27.854141 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 23 22:58:27.854395 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 23 22:58:27.858642 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 23 22:58:27.858763 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 23 22:58:27.861558 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 23 22:58:27.861654 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 23 22:58:27.868091 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 23 22:58:27.868175 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 23 22:58:27.873608 systemd[1]: Stopped target network.target - Network.
Nov 23 22:58:27.878532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 23 22:58:27.878643 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 22:58:27.882679 systemd[1]: Stopped target paths.target - Path Units.
Nov 23 22:58:27.886167 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 22:58:27.888854 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 22:58:27.891700 systemd[1]: Stopped target slices.target - Slice Units.
Nov 23 22:58:27.897116 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 22:58:27.902343 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 22:58:27.902426 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:58:27.907802 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 22:58:27.907890 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:58:27.910385 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 22:58:27.910478 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 22:58:27.917253 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 22:58:27.917345 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 22:58:27.920641 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 22:58:27.924699 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 22:58:27.967472 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 22:58:27.967711 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 22:58:27.982046 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 22:58:27.982611 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 22:58:27.983840 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 22:58:27.993584 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 22:58:27.994696 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 22:58:28.001263 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 22:58:28.001348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:58:28.005645 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Nov 23 22:58:28.015151 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 22:58:28.015281 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:58:28.018270 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 22:58:28.020847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:58:28.033705 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 22:58:28.035297 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 22:58:28.045044 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 22:58:28.045148 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:58:28.050708 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:58:28.063269 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 22:58:28.063677 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:58:28.076470 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 22:58:28.076680 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 22:58:28.085504 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 22:58:28.085667 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 22:58:28.104544 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 22:58:28.106246 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:58:28.114798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 22:58:28.114884 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 22:58:28.119185 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Nov 23 22:58:28.119257 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:58:28.125057 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 22:58:28.125690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:58:28.132046 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 22:58:28.132163 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 22:58:28.134744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 22:58:28.134872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:58:28.146033 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 22:58:28.151998 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 22:58:28.154218 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:58:28.157627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 22:58:28.157750 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:58:28.164125 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:58:28.164226 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:58:28.185599 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 22:58:28.185741 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 22:58:28.185861 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:58:28.186627 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 22:58:28.189747 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Nov 23 22:58:28.209782 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 22:58:28.211998 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 22:58:28.215555 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 22:58:28.221385 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 22:58:28.252020 systemd[1]: Switching root. Nov 23 22:58:28.299069 systemd-journald[258]: Journal stopped Nov 23 22:58:30.394052 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). Nov 23 22:58:30.394350 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 22:58:30.394398 kernel: SELinux: policy capability open_perms=1 Nov 23 22:58:30.394435 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 22:58:30.394465 kernel: SELinux: policy capability always_check_network=0 Nov 23 22:58:30.394493 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 22:58:30.394530 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 22:58:30.394558 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 22:58:30.394594 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 22:58:30.394624 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 22:58:30.397427 kernel: audit: type=1403 audit(1763938708.579:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 22:58:30.397486 systemd[1]: Successfully loaded SELinux policy in 84.886ms. Nov 23 22:58:30.397525 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.960ms. 
Nov 23 22:58:30.397559 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:58:30.397596 systemd[1]: Detected virtualization amazon. Nov 23 22:58:30.397627 systemd[1]: Detected architecture arm64. Nov 23 22:58:30.397658 systemd[1]: Detected first boot. Nov 23 22:58:30.397689 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:58:30.397720 zram_generator::config[1446]: No configuration found. Nov 23 22:58:30.397783 kernel: NET: Registered PF_VSOCK protocol family Nov 23 22:58:30.397823 systemd[1]: Populated /etc with preset unit settings. Nov 23 22:58:30.397857 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 22:58:30.397886 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 22:58:30.397923 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 22:58:30.400255 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 22:58:30.403399 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 22:58:30.403454 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 22:58:30.403487 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 22:58:30.403518 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 22:58:30.403548 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 22:58:30.403577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 22:58:30.403615 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Nov 23 22:58:30.403646 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 22:58:30.403686 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:58:30.403718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:58:30.404847 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 22:58:30.404882 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 22:58:30.404914 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 22:58:30.404944 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:58:30.404976 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 23 22:58:30.405013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:58:30.405040 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:58:30.405068 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 22:58:30.405098 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 22:58:30.405126 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 22:58:30.405154 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 22:58:30.405182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:58:30.405214 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:58:30.405246 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:58:30.405277 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:58:30.405306 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Nov 23 22:58:30.405337 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 22:58:30.405369 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 22:58:30.405398 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:58:30.405431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:58:30.405459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:58:30.405489 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 22:58:30.405522 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 22:58:30.405553 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 22:58:30.405581 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 22:58:30.405611 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 22:58:30.405645 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 22:58:30.405673 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 22:58:30.405704 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 22:58:30.419000 systemd[1]: Reached target machines.target - Containers. Nov 23 22:58:30.419061 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 22:58:30.419104 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:58:30.419135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:58:30.419166 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Nov 23 22:58:30.419194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:58:30.419222 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:58:30.419252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:58:30.419281 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 22:58:30.419312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:58:30.419344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 22:58:30.419373 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 22:58:30.421447 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 22:58:30.421482 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 22:58:30.421511 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 22:58:30.421541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:58:30.421569 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:58:30.421598 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:58:30.421633 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:58:30.421664 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 22:58:30.421692 kernel: fuse: init (API version 7.41) Nov 23 22:58:30.421720 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 22:58:30.421788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 23 22:58:30.423088 kernel: ACPI: bus type drm_connector registered Nov 23 22:58:30.429672 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 22:58:30.429709 systemd[1]: Stopped verity-setup.service. Nov 23 22:58:30.435758 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 22:58:30.435816 kernel: loop: module loaded Nov 23 22:58:30.435860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 22:58:30.435895 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 22:58:30.435924 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 22:58:30.435955 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 22:58:30.435983 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 22:58:30.436012 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:58:30.436040 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 22:58:30.436069 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 22:58:30.436097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:58:30.436126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:58:30.436160 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:58:30.436189 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:58:30.436217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:58:30.436245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:58:30.436274 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 22:58:30.436302 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 22:58:30.436331 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 23 22:58:30.436360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:58:30.436392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:58:30.436422 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 22:58:30.436450 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 22:58:30.436479 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 22:58:30.436508 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 22:58:30.436539 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:58:30.436569 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 22:58:30.436599 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 22:58:30.436628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:58:30.436660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 22:58:30.436690 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:58:30.436720 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 22:58:30.436778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:58:30.436813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:58:30.436842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 22:58:30.436874 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Nov 23 22:58:30.436902 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:58:30.436934 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 22:58:30.436962 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 22:58:30.436991 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 22:58:30.437023 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:58:30.437052 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 22:58:30.440907 systemd-journald[1532]: Collecting audit messages is disabled. Nov 23 22:58:30.440993 kernel: loop0: detected capacity change from 0 to 61264 Nov 23 22:58:30.441025 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 22:58:30.441058 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 22:58:30.441090 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 22:58:30.441119 systemd-journald[1532]: Journal started Nov 23 22:58:30.441173 systemd-journald[1532]: Runtime Journal (/run/log/journal/ec23404a6f2856cfbfc7bfecb7f7e58c) is 8M, max 75.3M, 67.3M free. Nov 23 22:58:29.561860 systemd[1]: Queued start job for default target multi-user.target. Nov 23 22:58:29.578600 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 23 22:58:29.579450 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 22:58:30.453298 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:58:30.502938 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 22:58:30.526923 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Nov 23 22:58:30.573763 kernel: loop1: detected capacity change from 0 to 100632 Nov 23 22:58:30.575284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:58:30.586444 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 22:58:30.589849 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 22:58:30.619721 systemd-journald[1532]: Time spent on flushing to /var/log/journal/ec23404a6f2856cfbfc7bfecb7f7e58c is 65.907ms for 933 entries. Nov 23 22:58:30.619721 systemd-journald[1532]: System Journal (/var/log/journal/ec23404a6f2856cfbfc7bfecb7f7e58c) is 8M, max 195.6M, 187.6M free. Nov 23 22:58:30.709968 systemd-journald[1532]: Received client request to flush runtime journal. Nov 23 22:58:30.710044 kernel: loop2: detected capacity change from 0 to 119840 Nov 23 22:58:30.614943 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 22:58:30.625815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:58:30.638887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:58:30.716139 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 22:58:30.730365 kernel: loop3: detected capacity change from 0 to 211168 Nov 23 22:58:30.749276 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Nov 23 22:58:30.749308 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Nov 23 22:58:30.764697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 23 22:58:30.861312 kernel: loop4: detected capacity change from 0 to 61264 Nov 23 22:58:30.893767 kernel: loop5: detected capacity change from 0 to 100632 Nov 23 22:58:30.924806 kernel: loop6: detected capacity change from 0 to 119840 Nov 23 22:58:30.953828 kernel: loop7: detected capacity change from 0 to 211168 Nov 23 22:58:30.982900 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 23 22:58:30.984663 (sd-merge)[1603]: Merged extensions into '/usr'. Nov 23 22:58:30.997921 systemd[1]: Reload requested from client PID 1561 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 22:58:30.997952 systemd[1]: Reloading... Nov 23 22:58:31.212223 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 22:58:31.227783 zram_generator::config[1630]: No configuration found. Nov 23 22:58:31.634131 systemd[1]: Reloading finished in 635 ms. Nov 23 22:58:31.661801 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 22:58:31.665140 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 22:58:31.668523 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 22:58:31.683826 systemd[1]: Starting ensure-sysext.service... Nov 23 22:58:31.692972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:58:31.704013 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:58:31.739385 systemd[1]: Reload requested from client PID 1683 ('systemctl') (unit ensure-sysext.service)... Nov 23 22:58:31.739422 systemd[1]: Reloading... Nov 23 22:58:31.748272 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Nov 23 22:58:31.748873 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 22:58:31.749556 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 22:58:31.751475 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 22:58:31.753472 systemd-tmpfiles[1684]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 22:58:31.755379 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Nov 23 22:58:31.755580 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Nov 23 22:58:31.770784 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:58:31.770803 systemd-tmpfiles[1684]: Skipping /boot Nov 23 22:58:31.810180 systemd-tmpfiles[1684]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:58:31.812813 systemd-tmpfiles[1684]: Skipping /boot Nov 23 22:58:31.833577 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Nov 23 22:58:31.920801 zram_generator::config[1712]: No configuration found. Nov 23 22:58:32.219957 (udev-worker)[1738]: Network interface NamePolicy= disabled on kernel command line. Nov 23 22:58:32.470866 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 23 22:58:32.472593 systemd[1]: Reloading finished in 732 ms. Nov 23 22:58:32.491151 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:58:32.495828 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:58:32.565462 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:58:32.572156 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 23 22:58:32.580132 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 22:58:32.612405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:58:32.620618 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:58:32.628234 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 22:58:32.654631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:58:32.657824 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:58:32.673990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:58:32.680320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:58:32.682864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:58:32.683101 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:58:32.689978 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 22:58:32.701331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:58:32.701839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:58:32.702154 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 23 22:58:32.712003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:58:32.715061 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:58:32.717160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:58:32.717385 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:58:32.717697 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 22:58:32.733625 systemd[1]: Finished ensure-sysext.service. Nov 23 22:58:32.736342 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 22:58:32.758549 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:58:32.758937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:58:32.805434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:58:32.807853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:58:32.820801 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 22:58:32.829508 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 22:58:32.856980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:58:32.885282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:58:32.887607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:58:32.891356 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:58:32.891697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 23 22:58:32.894669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:58:32.894789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:58:32.900878 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 22:58:32.904000 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 22:58:32.923343 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 22:58:32.941404 augenrules[1862]: No rules Nov 23 22:58:32.947518 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:58:32.948129 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:58:33.112830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:58:33.205778 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 22:58:33.298902 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 22:58:33.304246 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 22:58:33.368838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 22:58:33.402289 systemd-networkd[1821]: lo: Link UP Nov 23 22:58:33.402305 systemd-networkd[1821]: lo: Gained carrier Nov 23 22:58:33.405804 systemd-networkd[1821]: Enumeration completed Nov 23 22:58:33.406119 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 23 22:58:33.411188 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:58:33.411341 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:58:33.412989 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 22:58:33.421311 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 22:58:33.425163 systemd-networkd[1821]: eth0: Link UP Nov 23 22:58:33.425448 systemd-networkd[1821]: eth0: Gained carrier Nov 23 22:58:33.425488 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:58:33.434488 systemd-resolved[1822]: Positive Trust Anchors: Nov 23 22:58:33.434525 systemd-resolved[1822]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:58:33.434589 systemd-resolved[1822]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:58:33.443882 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.24.27/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 22:58:33.455356 systemd-resolved[1822]: Defaulting to hostname 'linux'. Nov 23 22:58:33.459160 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Nov 23 22:58:33.461933 systemd[1]: Reached target network.target - Network. Nov 23 22:58:33.464086 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:58:33.466833 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:58:33.469393 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 22:58:33.472175 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 22:58:33.475499 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 22:58:33.478211 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 22:58:33.481254 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 22:58:33.484090 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 22:58:33.484144 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:58:33.486145 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:58:33.490870 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 22:58:33.495998 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 22:58:33.502320 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 22:58:33.505668 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 22:58:33.509839 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 22:58:33.516461 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 22:58:33.519483 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Nov 23 22:58:33.524875 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 22:58:33.528140 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 22:58:33.531531 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:58:33.533921 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:58:33.536285 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:58:33.536353 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:58:33.538575 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 22:58:33.543402 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 22:58:33.551340 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 22:58:33.562139 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 22:58:33.566784 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 22:58:33.574647 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 22:58:33.577513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 22:58:33.585612 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 22:58:33.605317 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 22:58:33.610200 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 22:58:33.618509 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 23 22:58:33.638483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 23 22:58:33.647212 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 22:58:33.654420 jq[1970]: false Nov 23 22:58:33.663453 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 22:58:33.667467 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 22:58:33.680284 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 22:58:33.684230 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 22:58:33.689909 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 22:58:33.704006 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 22:58:33.707671 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 22:58:33.708177 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 22:58:33.736770 extend-filesystems[1971]: Found /dev/nvme0n1p6 Nov 23 22:58:33.752436 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 22:58:33.753665 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 23 22:58:33.760829 extend-filesystems[1971]: Found /dev/nvme0n1p9 Nov 23 22:58:33.766156 extend-filesystems[1971]: Checking size of /dev/nvme0n1p9 Nov 23 22:58:33.804757 extend-filesystems[1971]: Resized partition /dev/nvme0n1p9 Nov 23 22:58:33.819872 extend-filesystems[2012]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 22:58:33.823300 jq[1986]: true Nov 23 22:58:33.844775 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 23 22:58:33.853703 tar[1989]: linux-arm64/LICENSE Nov 23 22:58:33.857506 tar[1989]: linux-arm64/helm Nov 23 22:58:33.856709 (ntainerd)[2015]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 22:58:33.859017 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 22:58:33.865196 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 22:58:33.868490 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 23 22:58:33.924678 update_engine[1985]: I20251123 22:58:33.924260 1985 main.cc:92] Flatcar Update Engine starting Nov 23 22:58:33.968968 jq[2018]: true Nov 23 22:58:34.004086 ntpd[1973]: 23 Nov 22:58:33 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:58:34.004086 ntpd[1973]: 23 Nov 22:58:33 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:58:33.972317 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 22:58:33.971425 dbus-daemon[1968]: [system] SELinux support is enabled Nov 23 22:58:33.981458 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 22:58:33.999588 ntpd[1973]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:58:33.981505 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 23 22:58:33.999695 ntpd[1973]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:58:33.984558 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 22:58:34.017780 ntpd[1973]: 23 Nov 22:58:33 ntpd[1973]: ---------------------------------------------------- Nov 23 22:58:34.017780 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:58:34.017780 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:58:34.017780 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: corporation. Support and training for ntp-4 are Nov 23 22:58:34.017780 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: available at https://www.nwtime.org/support Nov 23 22:58:34.017780 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: ---------------------------------------------------- Nov 23 22:58:33.999714 ntpd[1973]: ---------------------------------------------------- Nov 23 22:58:33.984595 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 22:58:34.022262 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: proto: precision = 0.096 usec (-23) Nov 23 22:58:34.001433 dbus-daemon[1968]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 23 22:58:34.012820 ntpd[1973]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:58:34.012856 ntpd[1973]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 23 22:58:34.012874 ntpd[1973]: corporation. Support and training for ntp-4 are Nov 23 22:58:34.012890 ntpd[1973]: available at https://www.nwtime.org/support Nov 23 22:58:34.012906 ntpd[1973]: ---------------------------------------------------- Nov 23 22:58:34.020016 ntpd[1973]: proto: precision = 0.096 usec (-23) Nov 23 22:58:34.027305 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: basedate set to 2025-11-11 Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: gps base set to 2025-11-16 (week 2393) Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: Listen normally on 3 eth0 172.31.24.27:123 Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: Listen normally on 4 lo [::1]:123 Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: bind(21) AF_INET6 [fe80::4ad:fcff:fe7f:2a9%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 22:58:34.035485 ntpd[1973]: 23 Nov 22:58:34 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::4ad:fcff:fe7f:2a9%2]:123 Nov 23 22:58:34.031925 ntpd[1973]: basedate set to 2025-11-11 Nov 23 22:58:34.047015 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 23 22:58:34.031958 ntpd[1973]: gps base set to 2025-11-16 (week 2393) Nov 23 22:58:34.052795 systemd-coredump[2034]: Process 1973 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Nov 23 22:58:34.032141 ntpd[1973]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:58:34.032185 ntpd[1973]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:58:34.032471 ntpd[1973]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:58:34.032515 ntpd[1973]: Listen normally on 3 eth0 172.31.24.27:123 Nov 23 22:58:34.032560 ntpd[1973]: Listen normally on 4 lo [::1]:123 Nov 23 22:58:34.032605 ntpd[1973]: bind(21) AF_INET6 [fe80::4ad:fcff:fe7f:2a9%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 22:58:34.032640 ntpd[1973]: unable to create socket on eth0 (5) for [fe80::4ad:fcff:fe7f:2a9%2]:123 Nov 23 22:58:34.059290 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 23 22:58:34.079593 update_engine[1985]: I20251123 22:58:34.078058 1985 update_check_scheduler.cc:74] Next update check in 10m34s Nov 23 22:58:34.069161 systemd[1]: Started systemd-coredump@0-2034-0.service - Process Core Dump (PID 2034/UID 0). Nov 23 22:58:34.072567 systemd[1]: Started update-engine.service - Update Engine. Nov 23 22:58:34.090868 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 22:58:34.135391 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 23 22:58:34.157753 extend-filesystems[2012]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 23 22:58:34.157753 extend-filesystems[2012]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 23 22:58:34.157753 extend-filesystems[2012]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 23 22:58:34.183653 extend-filesystems[1971]: Resized filesystem in /dev/nvme0n1p9 Nov 23 22:58:34.164460 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 22:58:34.165520 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 23 22:58:34.228215 coreos-metadata[1967]: Nov 23 22:58:34.227 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 22:58:34.233968 coreos-metadata[1967]: Nov 23 22:58:34.232 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 23 22:58:34.238859 coreos-metadata[1967]: Nov 23 22:58:34.236 INFO Fetch successful Nov 23 22:58:34.238859 coreos-metadata[1967]: Nov 23 22:58:34.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 23 22:58:34.239931 coreos-metadata[1967]: Nov 23 22:58:34.239 INFO Fetch successful Nov 23 22:58:34.239931 coreos-metadata[1967]: Nov 23 22:58:34.239 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 23 22:58:34.244320 coreos-metadata[1967]: Nov 23 22:58:34.244 INFO Fetch successful Nov 23 22:58:34.244320 coreos-metadata[1967]: Nov 23 22:58:34.244 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 23 22:58:34.250771 coreos-metadata[1967]: Nov 23 22:58:34.245 INFO Fetch successful Nov 23 22:58:34.250771 coreos-metadata[1967]: Nov 23 22:58:34.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 23 22:58:34.251871 coreos-metadata[1967]: Nov 23 22:58:34.251 INFO Fetch failed with 404: resource not found Nov 23 22:58:34.251871 coreos-metadata[1967]: Nov 23 22:58:34.251 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 23 22:58:34.252665 coreos-metadata[1967]: Nov 23 22:58:34.252 INFO Fetch successful Nov 23 22:58:34.252665 coreos-metadata[1967]: Nov 23 22:58:34.252 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 23 22:58:34.253376 coreos-metadata[1967]: Nov 23 22:58:34.253 INFO Fetch successful Nov 23 22:58:34.253376 coreos-metadata[1967]: Nov 23 22:58:34.253 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 23 22:58:34.254130 coreos-metadata[1967]: Nov 23 22:58:34.254 INFO Fetch successful Nov 23 22:58:34.254130 coreos-metadata[1967]: Nov 23 22:58:34.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 23 22:58:34.258779 coreos-metadata[1967]: Nov 23 22:58:34.254 INFO Fetch successful Nov 23 22:58:34.258779 coreos-metadata[1967]: Nov 23 22:58:34.255 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 23 22:58:34.264032 coreos-metadata[1967]: Nov 23 22:58:34.263 INFO Fetch successful Nov 23 22:58:34.280582 bash[2057]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:58:34.289516 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 22:58:34.299243 systemd[1]: Starting sshkeys.service... Nov 23 22:58:34.407108 systemd-logind[1980]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 22:58:34.407598 systemd-logind[1980]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 23 22:58:34.408868 systemd-logind[1980]: New seat seat0. Nov 23 22:58:34.410818 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 22:58:34.467032 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 22:58:34.475461 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 22:58:34.493778 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 22:58:34.497245 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 22:58:34.516986 sshd_keygen[2024]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 22:58:34.702098 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 22:58:34.718067 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 23 22:58:34.769011 systemd-networkd[1821]: eth0: Gained IPv6LL Nov 23 22:58:34.782052 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 22:58:34.787310 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 22:58:34.794518 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 23 22:58:34.806829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:58:34.815130 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 22:58:34.829075 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 22:58:34.829501 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 22:58:34.854490 containerd[2015]: time="2025-11-23T22:58:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 22:58:34.855872 containerd[2015]: time="2025-11-23T22:58:34.855666612Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 22:58:34.876758 containerd[2015]: time="2025-11-23T22:58:34.875926704Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.364µs" Nov 23 22:58:34.876758 containerd[2015]: time="2025-11-23T22:58:34.875996016Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 22:58:34.876758 containerd[2015]: time="2025-11-23T22:58:34.876059988Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.877998864Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878082504Z" 
level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878144652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878304192Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878341932Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878800524Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878847756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878880216Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.878912220Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.879097884Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 22:58:34.880755 containerd[2015]: time="2025-11-23T22:58:34.879539880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Nov 23 22:58:34.881270 containerd[2015]: time="2025-11-23T22:58:34.879622668Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:58:34.881270 containerd[2015]: time="2025-11-23T22:58:34.879658536Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 22:58:34.881270 containerd[2015]: time="2025-11-23T22:58:34.879715920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 22:58:34.882808 containerd[2015]: time="2025-11-23T22:58:34.881421492Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 22:58:34.882808 containerd[2015]: time="2025-11-23T22:58:34.881602680Z" level=info msg="metadata content store policy set" policy=shared Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.892700316Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.892853460Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.892949772Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.892984704Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893013684Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893040816Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893076048Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893105952Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893135292Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893163156Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893189088Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893218980Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893469108Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 22:58:34.897758 containerd[2015]: time="2025-11-23T22:58:34.893513220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893545476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893579952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893607948Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893635320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893663052Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893687496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893714796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893774832Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.893805960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.894164808Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.894198216Z" level=info msg="Start snapshots syncer" Nov 23 22:58:34.898428 containerd[2015]: time="2025-11-23T22:58:34.895777248Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 22:58:34.899003 containerd[2015]: time="2025-11-23T22:58:34.896343228Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 22:58:34.899003 containerd[2015]: time="2025-11-23T22:58:34.896435148Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.896802324Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897059424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897117912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897146436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897172536Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897224340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897252696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897280104Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897351084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897379416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897407844Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897477792Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897509004Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:58:34.899263 containerd[2015]: time="2025-11-23T22:58:34.897530760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.897555336Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.897576780Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.897602280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.897642096Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.900453204Z" level=info msg="runtime interface created" Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.900569040Z" level=info msg="created NRI interface" Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.900602664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.900654792Z" level=info msg="Connect containerd service" Nov 23 22:58:34.907958 containerd[2015]: time="2025-11-23T22:58:34.900714408Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 22:58:34.907958 
containerd[2015]: time="2025-11-23T22:58:34.905517336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:58:34.903990 locksmithd[2038]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 22:58:34.965120 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 22:58:35.000797 coreos-metadata[2088]: Nov 23 22:58:35.000 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 22:58:35.004963 coreos-metadata[2088]: Nov 23 22:58:35.004 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 23 22:58:35.005203 coreos-metadata[2088]: Nov 23 22:58:35.005 INFO Fetch successful Nov 23 22:58:35.005280 coreos-metadata[2088]: Nov 23 22:58:35.005 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 23 22:58:35.007759 coreos-metadata[2088]: Nov 23 22:58:35.006 INFO Fetch successful Nov 23 22:58:35.021843 unknown[2088]: wrote ssh authorized keys file for user: core Nov 23 22:58:35.090805 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 22:58:35.183462 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 22:58:35.200367 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 23 22:58:35.203594 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 22:58:35.237824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 22:58:35.273630 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 22:58:35.285359 systemd[1]: Started sshd@0-172.31.24.27:22-139.178.68.195:46570.service - OpenSSH per-connection server daemon (139.178.68.195:46570). Nov 23 22:58:35.290368 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Nov 23 22:58:35.294858 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 23 22:58:35.299804 update-ssh-keys[2184]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:58:35.306634 amazon-ssm-agent[2158]: Initializing new seelog logger Nov 23 22:58:35.302497 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 22:58:35.314118 dbus-daemon[1968]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2036 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 23 22:58:35.317005 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Nov 23 22:58:35.317160 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.317160 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.334565 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 processing appconfig overrides Nov 23 22:58:35.340804 systemd[1]: Finished sshkeys.service. Nov 23 22:58:35.342843 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.342843 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.342843 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.3381 INFO Proxy environment variables: Nov 23 22:58:35.348507 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 processing appconfig overrides Nov 23 22:58:35.348507 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.348507 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 23 22:58:35.348507 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 processing appconfig overrides Nov 23 22:58:35.355161 systemd[1]: Starting polkit.service - Authorization Manager... Nov 23 22:58:35.373859 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.373859 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:35.374006 amazon-ssm-agent[2158]: 2025/11/23 22:58:35 processing appconfig overrides Nov 23 22:58:35.408426 systemd-coredump[2037]: Process 1973 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1973: #0 0x0000aaaac2710b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaac26bfe60 n/a (ntpd + 0xfe60) #2 0x0000aaaac26c0240 n/a (ntpd + 0x10240) #3 0x0000aaaac26bbe14 n/a (ntpd + 0xbe14) #4 0x0000aaaac26bd3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaac26c5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaac26b738c n/a (ntpd + 0x738c) #7 0x0000ffff83a42034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff83a42118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaac26b73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Nov 23 22:58:35.416106 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 23 22:58:35.416410 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 23 22:58:35.432286 systemd[1]: systemd-coredump@0-2034-0.service: Deactivated successfully. Nov 23 22:58:35.446578 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.3423 INFO http_proxy: Nov 23 22:58:35.547107 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.3423 INFO no_proxy: Nov 23 22:58:35.605126 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. 
Nov 23 22:58:35.609853 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 22:58:35.646755 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.3423 INFO https_proxy: Nov 23 22:58:35.671927 containerd[2015]: time="2025-11-23T22:58:35.671769336Z" level=info msg="Start subscribing containerd event" Nov 23 22:58:35.672083 containerd[2015]: time="2025-11-23T22:58:35.671943420Z" level=info msg="Start recovering state" Nov 23 22:58:35.672139 containerd[2015]: time="2025-11-23T22:58:35.672124056Z" level=info msg="Start event monitor" Nov 23 22:58:35.672204 containerd[2015]: time="2025-11-23T22:58:35.672174756Z" level=info msg="Start cni network conf syncer for default" Nov 23 22:58:35.672251 containerd[2015]: time="2025-11-23T22:58:35.672198852Z" level=info msg="Start streaming server" Nov 23 22:58:35.672251 containerd[2015]: time="2025-11-23T22:58:35.672218016Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 22:58:35.672345 containerd[2015]: time="2025-11-23T22:58:35.672259884Z" level=info msg="runtime interface starting up..." Nov 23 22:58:35.672345 containerd[2015]: time="2025-11-23T22:58:35.672278664Z" level=info msg="starting plugins..." Nov 23 22:58:35.672345 containerd[2015]: time="2025-11-23T22:58:35.672308676Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 22:58:35.673689 containerd[2015]: time="2025-11-23T22:58:35.673490232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 22:58:35.673689 containerd[2015]: time="2025-11-23T22:58:35.673600284Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 22:58:35.674408 containerd[2015]: time="2025-11-23T22:58:35.674359128Z" level=info msg="containerd successfully booted in 0.820636s" Nov 23 22:58:35.674477 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 23 22:58:35.734582 ntpd[2231]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:58:35.736204 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:14:25 UTC 2025 (1): Starting Nov 23 22:58:35.736204 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:58:35.734696 ntpd[2231]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 22:58:35.734714 ntpd[2231]: ---------------------------------------------------- Nov 23 22:58:35.736768 ntpd[2231]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:58:35.736883 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: ---------------------------------------------------- Nov 23 22:58:35.736883 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: ntp-4 is maintained by Network Time Foundation, Nov 23 22:58:35.736883 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:58:35.736883 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: corporation. Support and training for ntp-4 are Nov 23 22:58:35.736883 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: available at https://www.nwtime.org/support Nov 23 22:58:35.736883 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: ---------------------------------------------------- Nov 23 22:58:35.736791 ntpd[2231]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 22:58:35.736807 ntpd[2231]: corporation. 
Support and training for ntp-4 are Nov 23 22:58:35.736823 ntpd[2231]: available at https://www.nwtime.org/support Nov 23 22:58:35.736838 ntpd[2231]: ---------------------------------------------------- Nov 23 22:58:35.737883 ntpd[2231]: proto: precision = 0.096 usec (-23) Nov 23 22:58:35.745055 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.3481 INFO Checking if agent identity type OnPrem can be assumed Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: proto: precision = 0.096 usec (-23) Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: basedate set to 2025-11-11 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: gps base set to 2025-11-16 (week 2393) Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listen normally on 3 eth0 172.31.24.27:123 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listen normally on 4 lo [::1]:123 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listen normally on 5 eth0 [fe80::4ad:fcff:fe7f:2a9%2]:123 Nov 23 22:58:35.745123 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: Listening on routing socket on fd #22 for interface updates Nov 23 22:58:35.738205 ntpd[2231]: basedate set to 2025-11-11 Nov 23 22:58:35.738224 ntpd[2231]: gps base set to 2025-11-16 (week 2393) Nov 23 22:58:35.738343 ntpd[2231]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 22:58:35.738386 ntpd[2231]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 22:58:35.738645 ntpd[2231]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 22:58:35.738688 ntpd[2231]: Listen normally on 3 eth0 172.31.24.27:123 Nov 23 22:58:35.739793 ntpd[2231]: Listen normally on 4 lo [::1]:123 Nov 23 22:58:35.739862 ntpd[2231]: 
Listen normally on 5 eth0 [fe80::4ad:fcff:fe7f:2a9%2]:123 Nov 23 22:58:35.739905 ntpd[2231]: Listening on routing socket on fd #22 for interface updates Nov 23 22:58:35.763222 ntpd[2231]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:58:35.763558 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:58:35.763558 ntpd[2231]: 23 Nov 22:58:35 ntpd[2231]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:58:35.763281 ntpd[2231]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 22:58:35.780770 sshd[2207]: Accepted publickey for core from 139.178.68.195 port 46570 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:35.790238 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:35.810807 polkitd[2212]: Started polkitd version 126 Nov 23 22:58:35.826111 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 22:58:35.829383 polkitd[2212]: Loading rules from directory /etc/polkit-1/rules.d Nov 23 22:58:35.830043 polkitd[2212]: Loading rules from directory /run/polkit-1/rules.d Nov 23 22:58:35.830128 polkitd[2212]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 22:58:35.831417 polkitd[2212]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 23 22:58:35.831553 polkitd[2212]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 22:58:35.831770 polkitd[2212]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 23 22:58:35.832573 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 23 22:58:35.842940 polkitd[2212]: Finished loading, compiling and executing 2 rules Nov 23 22:58:35.853491 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.3482 INFO Checking if agent identity type EC2 can be assumed Nov 23 22:58:35.864097 dbus-daemon[1968]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 23 22:58:35.866208 polkitd[2212]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 23 22:58:35.871032 systemd[1]: Started polkit.service - Authorization Manager. Nov 23 22:58:35.871115 systemd-logind[1980]: New session 1 of user core. Nov 23 22:58:35.899851 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 22:58:35.913374 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 22:58:35.955808 systemd-hostnamed[2036]: Hostname set to (transient) Nov 23 22:58:35.956497 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.5919 INFO Agent will take identity from EC2 Nov 23 22:58:35.955485 (systemd)[2245]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 22:58:35.956563 systemd-resolved[1822]: System hostname changed to 'ip-172-31-24-27'. Nov 23 22:58:35.964024 systemd-logind[1980]: New session c1 of user core. Nov 23 22:58:36.054772 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.5998 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 23 22:58:36.155868 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.5998 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 23 22:58:36.252904 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.5998 INFO [amazon-ssm-agent] Starting Core Agent Nov 23 22:58:36.328544 systemd[2245]: Queued start job for default target default.target. Nov 23 22:58:36.338387 systemd[2245]: Created slice app.slice - User Application Slice. Nov 23 22:58:36.338458 systemd[2245]: Reached target paths.target - Paths. Nov 23 22:58:36.338568 systemd[2245]: Reached target timers.target - Timers. 
Nov 23 22:58:36.344121 systemd[2245]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 22:58:36.355351 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.5999 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Nov 23 22:58:36.391410 systemd[2245]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 22:58:36.391747 systemd[2245]: Reached target sockets.target - Sockets. Nov 23 22:58:36.391911 systemd[2245]: Reached target basic.target - Basic System. Nov 23 22:58:36.392000 systemd[2245]: Reached target default.target - Main User Target. Nov 23 22:58:36.392062 systemd[2245]: Startup finished in 411ms. Nov 23 22:58:36.392076 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 22:58:36.404057 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 22:58:36.455799 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.5999 INFO [Registrar] Starting registrar module Nov 23 22:58:36.557870 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.6032 INFO [EC2Identity] Checking disk for registration info Nov 23 22:58:36.584263 systemd[1]: Started sshd@1-172.31.24.27:22-139.178.68.195:46576.service - OpenSSH per-connection server daemon (139.178.68.195:46576). Nov 23 22:58:36.659030 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.6035 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 23 22:58:36.713443 tar[1989]: linux-arm64/README.md Nov 23 22:58:36.754907 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 22:58:36.760420 amazon-ssm-agent[2158]: 2025-11-23 22:58:35.6035 INFO [EC2Identity] Generating registration keypair Nov 23 22:58:36.888452 sshd[2257]: Accepted publickey for core from 139.178.68.195 port 46576 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:36.891864 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:36.904876 systemd-logind[1980]: New session 2 of user core. 
Nov 23 22:58:36.915047 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 22:58:36.932869 amazon-ssm-agent[2158]: 2025-11-23 22:58:36.9326 INFO [EC2Identity] Checking write access before registering Nov 23 22:58:36.974790 amazon-ssm-agent[2158]: 2025/11/23 22:58:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:36.974790 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 22:58:36.974790 amazon-ssm-agent[2158]: 2025/11/23 22:58:36 processing appconfig overrides Nov 23 22:58:37.005148 amazon-ssm-agent[2158]: 2025-11-23 22:58:36.9336 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 23 22:58:37.005335 amazon-ssm-agent[2158]: 2025-11-23 22:58:36.9739 INFO [EC2Identity] EC2 registration was successful. Nov 23 22:58:37.005475 amazon-ssm-agent[2158]: 2025-11-23 22:58:36.9739 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 23 22:58:37.005588 amazon-ssm-agent[2158]: 2025-11-23 22:58:36.9740 INFO [CredentialRefresher] credentialRefresher has started Nov 23 22:58:37.005828 amazon-ssm-agent[2158]: 2025-11-23 22:58:36.9741 INFO [CredentialRefresher] Starting credentials refresher loop Nov 23 22:58:37.005828 amazon-ssm-agent[2158]: 2025-11-23 22:58:37.0046 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 23 22:58:37.005828 amazon-ssm-agent[2158]: 2025-11-23 22:58:37.0050 INFO [CredentialRefresher] Credentials ready Nov 23 22:58:37.033620 amazon-ssm-agent[2158]: 2025-11-23 22:58:37.0060 INFO [CredentialRefresher] Next credential rotation will be in 29.9999772609 minutes Nov 23 22:58:37.050788 sshd[2263]: Connection closed by 139.178.68.195 port 46576 Nov 23 22:58:37.053004 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:37.062471 systemd[1]: sshd@1-172.31.24.27:22-139.178.68.195:46576.service: Deactivated successfully. 
Nov 23 22:58:37.066609 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 22:58:37.069062 systemd-logind[1980]: Session 2 logged out. Waiting for processes to exit. Nov 23 22:58:37.085772 systemd-logind[1980]: Removed session 2. Nov 23 22:58:37.087809 systemd[1]: Started sshd@2-172.31.24.27:22-139.178.68.195:46592.service - OpenSSH per-connection server daemon (139.178.68.195:46592). Nov 23 22:58:37.290596 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 46592 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:37.293051 sshd-session[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:37.300825 systemd-logind[1980]: New session 3 of user core. Nov 23 22:58:37.304948 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 22:58:37.313362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:58:37.318640 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 22:58:37.322442 systemd[1]: Startup finished in 3.702s (kernel) + 8.871s (initrd) + 8.826s (userspace) = 21.400s. Nov 23 22:58:37.332578 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:58:37.444559 sshd[2278]: Connection closed by 139.178.68.195 port 46592 Nov 23 22:58:37.446769 sshd-session[2269]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:37.454432 systemd-logind[1980]: Session 3 logged out. Waiting for processes to exit. Nov 23 22:58:37.455295 systemd[1]: sshd@2-172.31.24.27:22-139.178.68.195:46592.service: Deactivated successfully. Nov 23 22:58:37.461113 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 22:58:37.465673 systemd-logind[1980]: Removed session 3. 
Nov 23 22:58:38.035847 amazon-ssm-agent[2158]: 2025-11-23 22:58:38.0356 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 23 22:58:38.137911 amazon-ssm-agent[2158]: 2025-11-23 22:58:38.0431 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2294) started Nov 23 22:58:38.238580 amazon-ssm-agent[2158]: 2025-11-23 22:58:38.0431 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 23 22:58:38.257287 kubelet[2277]: E1123 22:58:38.257205 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:58:38.262159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:58:38.262926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:58:38.264126 systemd[1]: kubelet.service: Consumed 1.490s CPU time, 258.9M memory peak. Nov 23 22:58:47.486016 systemd[1]: Started sshd@3-172.31.24.27:22-139.178.68.195:51448.service - OpenSSH per-connection server daemon (139.178.68.195:51448). Nov 23 22:58:47.679143 sshd[2309]: Accepted publickey for core from 139.178.68.195 port 51448 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:47.681358 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:47.689256 systemd-logind[1980]: New session 4 of user core. Nov 23 22:58:47.701976 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 23 22:58:47.826918 sshd[2312]: Connection closed by 139.178.68.195 port 51448 Nov 23 22:58:47.827667 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:47.834994 systemd[1]: sshd@3-172.31.24.27:22-139.178.68.195:51448.service: Deactivated successfully. Nov 23 22:58:47.838672 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 22:58:47.841143 systemd-logind[1980]: Session 4 logged out. Waiting for processes to exit. Nov 23 22:58:47.844012 systemd-logind[1980]: Removed session 4. Nov 23 22:58:47.864824 systemd[1]: Started sshd@4-172.31.24.27:22-139.178.68.195:51454.service - OpenSSH per-connection server daemon (139.178.68.195:51454). Nov 23 22:58:48.055340 sshd[2318]: Accepted publickey for core from 139.178.68.195 port 51454 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:48.057526 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:48.065087 systemd-logind[1980]: New session 5 of user core. Nov 23 22:58:48.074966 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 22:58:48.191910 sshd[2321]: Connection closed by 139.178.68.195 port 51454 Nov 23 22:58:48.192747 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:48.199302 systemd[1]: sshd@4-172.31.24.27:22-139.178.68.195:51454.service: Deactivated successfully. Nov 23 22:58:48.203835 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 22:58:48.205942 systemd-logind[1980]: Session 5 logged out. Waiting for processes to exit. Nov 23 22:58:48.209442 systemd-logind[1980]: Removed session 5. Nov 23 22:58:48.223824 systemd[1]: Started sshd@5-172.31.24.27:22-139.178.68.195:51464.service - OpenSSH per-connection server daemon (139.178.68.195:51464). Nov 23 22:58:48.381722 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 23 22:58:48.385963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:58:48.414928 sshd[2327]: Accepted publickey for core from 139.178.68.195 port 51464 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:48.417038 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:48.427849 systemd-logind[1980]: New session 6 of user core. Nov 23 22:58:48.436049 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 22:58:48.564317 sshd[2333]: Connection closed by 139.178.68.195 port 51464 Nov 23 22:58:48.567021 sshd-session[2327]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:48.573800 systemd-logind[1980]: Session 6 logged out. Waiting for processes to exit. Nov 23 22:58:48.576034 systemd[1]: sshd@5-172.31.24.27:22-139.178.68.195:51464.service: Deactivated successfully. Nov 23 22:58:48.583382 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 22:58:48.606204 systemd[1]: Started sshd@6-172.31.24.27:22-139.178.68.195:51474.service - OpenSSH per-connection server daemon (139.178.68.195:51474). Nov 23 22:58:48.610268 systemd-logind[1980]: Removed session 6. Nov 23 22:58:48.739128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:58:48.754256 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:58:48.804811 sshd[2339]: Accepted publickey for core from 139.178.68.195 port 51474 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:48.806930 sshd-session[2339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:48.816146 systemd-logind[1980]: New session 7 of user core. Nov 23 22:58:48.823000 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 23 22:58:48.843598 kubelet[2347]: E1123 22:58:48.843540 2347 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:58:48.851978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:58:48.852478 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:58:48.853169 systemd[1]: kubelet.service: Consumed 314ms CPU time, 106M memory peak. Nov 23 22:58:48.941457 sudo[2355]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 22:58:48.942131 sudo[2355]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:58:48.956649 sudo[2355]: pam_unix(sudo:session): session closed for user root Nov 23 22:58:48.979795 sshd[2352]: Connection closed by 139.178.68.195 port 51474 Nov 23 22:58:48.980860 sshd-session[2339]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:48.987411 systemd-logind[1980]: Session 7 logged out. Waiting for processes to exit. Nov 23 22:58:48.987655 systemd[1]: sshd@6-172.31.24.27:22-139.178.68.195:51474.service: Deactivated successfully. Nov 23 22:58:48.990782 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 22:58:48.995424 systemd-logind[1980]: Removed session 7. Nov 23 22:58:49.015146 systemd[1]: Started sshd@7-172.31.24.27:22-139.178.68.195:51482.service - OpenSSH per-connection server daemon (139.178.68.195:51482). 
Nov 23 22:58:49.219810 sshd[2361]: Accepted publickey for core from 139.178.68.195 port 51482 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:49.222054 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:49.229498 systemd-logind[1980]: New session 8 of user core. Nov 23 22:58:49.236987 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 22:58:49.339932 sudo[2366]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 22:58:49.340558 sudo[2366]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:58:49.350223 sudo[2366]: pam_unix(sudo:session): session closed for user root Nov 23 22:58:49.359989 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 22:58:49.360573 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:58:49.377457 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:58:49.436710 augenrules[2388]: No rules Nov 23 22:58:49.439230 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:58:49.440850 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:58:49.443863 sudo[2365]: pam_unix(sudo:session): session closed for user root Nov 23 22:58:49.466780 sshd[2364]: Connection closed by 139.178.68.195 port 51482 Nov 23 22:58:49.467751 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:49.473879 systemd-logind[1980]: Session 8 logged out. Waiting for processes to exit. Nov 23 22:58:49.474361 systemd[1]: sshd@7-172.31.24.27:22-139.178.68.195:51482.service: Deactivated successfully. Nov 23 22:58:49.477612 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 22:58:49.481942 systemd-logind[1980]: Removed session 8. 
Nov 23 22:58:49.504710 systemd[1]: Started sshd@8-172.31.24.27:22-139.178.68.195:51498.service - OpenSSH per-connection server daemon (139.178.68.195:51498). Nov 23 22:58:49.696582 sshd[2397]: Accepted publickey for core from 139.178.68.195 port 51498 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 22:58:49.698901 sshd-session[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:49.707827 systemd-logind[1980]: New session 9 of user core. Nov 23 22:58:49.717027 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 22:58:49.819589 sudo[2401]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 22:58:49.820356 sudo[2401]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:58:50.347597 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 22:58:50.363234 (dockerd)[2418]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 22:58:50.746206 dockerd[2418]: time="2025-11-23T22:58:50.745115029Z" level=info msg="Starting up" Nov 23 22:58:50.749895 dockerd[2418]: time="2025-11-23T22:58:50.749845646Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 22:58:50.770606 dockerd[2418]: time="2025-11-23T22:58:50.770551493Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 22:58:50.820919 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3845115216-merged.mount: Deactivated successfully. Nov 23 22:58:50.851601 dockerd[2418]: time="2025-11-23T22:58:50.851543442Z" level=info msg="Loading containers: start." Nov 23 22:58:50.868794 kernel: Initializing XFRM netlink socket Nov 23 22:58:51.209224 (udev-worker)[2439]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 22:58:51.288362 systemd-networkd[1821]: docker0: Link UP
Nov 23 22:58:51.299394 dockerd[2418]: time="2025-11-23T22:58:51.299324059Z" level=info msg="Loading containers: done."
Nov 23 22:58:51.330482 dockerd[2418]: time="2025-11-23T22:58:51.330419888Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 23 22:58:51.330702 dockerd[2418]: time="2025-11-23T22:58:51.330582989Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 23 22:58:51.330793 dockerd[2418]: time="2025-11-23T22:58:51.330753835Z" level=info msg="Initializing buildkit"
Nov 23 22:58:51.385452 dockerd[2418]: time="2025-11-23T22:58:51.385372287Z" level=info msg="Completed buildkit initialization"
Nov 23 22:58:51.402507 dockerd[2418]: time="2025-11-23T22:58:51.401629479Z" level=info msg="Daemon has completed initialization"
Nov 23 22:58:51.402507 dockerd[2418]: time="2025-11-23T22:58:51.402084795Z" level=info msg="API listen on /run/docker.sock"
Nov 23 22:58:51.403012 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 23 22:58:51.812935 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2012958907-merged.mount: Deactivated successfully.
Nov 23 22:58:52.670163 containerd[2015]: time="2025-11-23T22:58:52.670041801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\""
Nov 23 22:58:53.293645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899623639.mount: Deactivated successfully.
Nov 23 22:58:54.800847 containerd[2015]: time="2025-11-23T22:58:54.799882456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:54.803394 containerd[2015]: time="2025-11-23T22:58:54.803350713Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=27385704"
Nov 23 22:58:54.805973 containerd[2015]: time="2025-11-23T22:58:54.805926925Z" level=info msg="ImageCreate event name:\"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:54.813106 containerd[2015]: time="2025-11-23T22:58:54.813045619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:54.816254 containerd[2015]: time="2025-11-23T22:58:54.816191215Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"27382303\" in 2.146089948s"
Nov 23 22:58:54.816392 containerd[2015]: time="2025-11-23T22:58:54.816253754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\""
Nov 23 22:58:54.819099 containerd[2015]: time="2025-11-23T22:58:54.819048823Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\""
Nov 23 22:58:56.379006 containerd[2015]: time="2025-11-23T22:58:56.378926102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:56.380943 containerd[2015]: time="2025-11-23T22:58:56.380875409Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=23551824"
Nov 23 22:58:56.383615 containerd[2015]: time="2025-11-23T22:58:56.383544835Z" level=info msg="ImageCreate event name:\"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:56.389413 containerd[2015]: time="2025-11-23T22:58:56.389333504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:56.391811 containerd[2015]: time="2025-11-23T22:58:56.391219430Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"25136308\" in 1.572111778s"
Nov 23 22:58:56.391811 containerd[2015]: time="2025-11-23T22:58:56.391279076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\""
Nov 23 22:58:56.392515 containerd[2015]: time="2025-11-23T22:58:56.392477659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\""
Nov 23 22:58:57.699327 containerd[2015]: time="2025-11-23T22:58:57.699257111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:57.700905 containerd[2015]: time="2025-11-23T22:58:57.700865147Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=18296696"
Nov 23 22:58:57.702014 containerd[2015]: time="2025-11-23T22:58:57.701957177Z" level=info msg="ImageCreate event name:\"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:57.708059 containerd[2015]: time="2025-11-23T22:58:57.708008345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:57.710777 containerd[2015]: time="2025-11-23T22:58:57.709809257Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"19881198\" in 1.317176036s"
Nov 23 22:58:57.710777 containerd[2015]: time="2025-11-23T22:58:57.709866477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\""
Nov 23 22:58:57.711205 containerd[2015]: time="2025-11-23T22:58:57.711167724Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\""
Nov 23 22:58:58.955167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484790465.mount: Deactivated successfully.
Nov 23 22:58:58.957643 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 23 22:58:58.962061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 22:58:59.344472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 22:58:59.361330 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 22:58:59.482595 kubelet[2710]: E1123 22:58:59.482509 2710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 22:58:59.490884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 22:58:59.491212 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 22:58:59.492715 systemd[1]: kubelet.service: Consumed 331ms CPU time, 106.9M memory peak.
Nov 23 22:58:59.860534 containerd[2015]: time="2025-11-23T22:58:59.860460116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:59.863598 containerd[2015]: time="2025-11-23T22:58:59.863524995Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=28257769"
Nov 23 22:58:59.866016 containerd[2015]: time="2025-11-23T22:58:59.865936905Z" level=info msg="ImageCreate event name:\"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:59.872638 containerd[2015]: time="2025-11-23T22:58:59.872545499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:58:59.875046 containerd[2015]: time="2025-11-23T22:58:59.874901629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"28256788\" in 2.163590217s"
Nov 23 22:58:59.875046 containerd[2015]: time="2025-11-23T22:58:59.874986247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\""
Nov 23 22:58:59.876198 containerd[2015]: time="2025-11-23T22:58:59.876082227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 23 22:59:00.516396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096038308.mount: Deactivated successfully.
Nov 23 22:59:01.706055 containerd[2015]: time="2025-11-23T22:59:01.705972852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:01.708296 containerd[2015]: time="2025-11-23T22:59:01.707771291Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Nov 23 22:59:01.710448 containerd[2015]: time="2025-11-23T22:59:01.710392104Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:01.716092 containerd[2015]: time="2025-11-23T22:59:01.716034636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:01.718095 containerd[2015]: time="2025-11-23T22:59:01.718049339Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.841648785s"
Nov 23 22:59:01.718263 containerd[2015]: time="2025-11-23T22:59:01.718233367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Nov 23 22:59:01.719346 containerd[2015]: time="2025-11-23T22:59:01.719292092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 23 22:59:02.223198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334922136.mount: Deactivated successfully.
Nov 23 22:59:02.236769 containerd[2015]: time="2025-11-23T22:59:02.236400481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 22:59:02.239373 containerd[2015]: time="2025-11-23T22:59:02.239330089Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Nov 23 22:59:02.241411 containerd[2015]: time="2025-11-23T22:59:02.241370881Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 22:59:02.247464 containerd[2015]: time="2025-11-23T22:59:02.247383966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 22:59:02.250020 containerd[2015]: time="2025-11-23T22:59:02.249817054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 530.467717ms"
Nov 23 22:59:02.250020 containerd[2015]: time="2025-11-23T22:59:02.249871153Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 23 22:59:02.250744 containerd[2015]: time="2025-11-23T22:59:02.250659803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 23 22:59:02.813605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956677629.mount: Deactivated successfully.
Nov 23 22:59:04.976173 containerd[2015]: time="2025-11-23T22:59:04.976090280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:04.980376 containerd[2015]: time="2025-11-23T22:59:04.980306355Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Nov 23 22:59:04.983341 containerd[2015]: time="2025-11-23T22:59:04.983250887Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:04.990674 containerd[2015]: time="2025-11-23T22:59:04.990585688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:04.993216 containerd[2015]: time="2025-11-23T22:59:04.992901682Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.742105442s"
Nov 23 22:59:04.993216 containerd[2015]: time="2025-11-23T22:59:04.992957810Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Nov 23 22:59:05.991932 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 23 22:59:09.500917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 23 22:59:09.506038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 22:59:09.854887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 22:59:09.870464 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 22:59:09.945327 kubelet[2861]: E1123 22:59:09.945270 2861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 22:59:09.950385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 22:59:09.950893 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 22:59:09.952071 systemd[1]: kubelet.service: Consumed 288ms CPU time, 104.9M memory peak.
Nov 23 22:59:11.283632 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 22:59:11.284441 systemd[1]: kubelet.service: Consumed 288ms CPU time, 104.9M memory peak.
Nov 23 22:59:11.288393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 22:59:11.340086 systemd[1]: Reload requested from client PID 2875 ('systemctl') (unit session-9.scope)...
Nov 23 22:59:11.340119 systemd[1]: Reloading...
Nov 23 22:59:11.588778 zram_generator::config[2922]: No configuration found.
Nov 23 22:59:12.050072 systemd[1]: Reloading finished in 709 ms.
Nov 23 22:59:12.134212 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 23 22:59:12.134408 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 23 22:59:12.135083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 22:59:12.135183 systemd[1]: kubelet.service: Consumed 224ms CPU time, 95M memory peak.
Nov 23 22:59:12.140259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 22:59:12.461822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 22:59:12.480310 (kubelet)[2983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 23 22:59:12.548703 kubelet[2983]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 22:59:12.549199 kubelet[2983]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 23 22:59:12.549767 kubelet[2983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 22:59:12.549767 kubelet[2983]: I1123 22:59:12.549389 2983 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 22:59:13.851758 kubelet[2983]: I1123 22:59:13.851466 2983 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 23 22:59:13.851758 kubelet[2983]: I1123 22:59:13.851516 2983 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 22:59:13.852609 kubelet[2983]: I1123 22:59:13.852583 2983 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 23 22:59:13.893700 kubelet[2983]: E1123 22:59:13.893633 2983 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 23 22:59:13.896210 kubelet[2983]: I1123 22:59:13.896155 2983 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 23 22:59:13.915746 kubelet[2983]: I1123 22:59:13.915673 2983 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 22:59:13.921832 kubelet[2983]: I1123 22:59:13.921790 2983 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 23 22:59:13.924306 kubelet[2983]: I1123 22:59:13.924233 2983 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 22:59:13.924571 kubelet[2983]: I1123 22:59:13.924296 2983 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 22:59:13.924764 kubelet[2983]: I1123 22:59:13.924698 2983 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 22:59:13.924764 kubelet[2983]: I1123 22:59:13.924719 2983 container_manager_linux.go:303] "Creating device plugin manager"
Nov 23 22:59:13.926345 kubelet[2983]: I1123 22:59:13.926296 2983 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 22:59:13.932076 kubelet[2983]: I1123 22:59:13.932038 2983 kubelet.go:480] "Attempting to sync node with API server"
Nov 23 22:59:13.932205 kubelet[2983]: I1123 22:59:13.932083 2983 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 22:59:13.932205 kubelet[2983]: I1123 22:59:13.932142 2983 kubelet.go:386] "Adding apiserver pod source"
Nov 23 22:59:13.932205 kubelet[2983]: I1123 22:59:13.932171 2983 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 22:59:13.935548 kubelet[2983]: E1123 22:59:13.934873 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 23 22:59:13.937087 kubelet[2983]: E1123 22:59:13.937041 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-27&limit=500&resourceVersion=0\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 23 22:59:13.937440 kubelet[2983]: I1123 22:59:13.937415 2983 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 23 22:59:13.938702 kubelet[2983]: I1123 22:59:13.938671 2983 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 23 22:59:13.939079 kubelet[2983]: W1123 22:59:13.939044 2983 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 23 22:59:13.944607 kubelet[2983]: I1123 22:59:13.944244 2983 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 23 22:59:13.944607 kubelet[2983]: I1123 22:59:13.944311 2983 server.go:1289] "Started kubelet"
Nov 23 22:59:13.948982 kubelet[2983]: I1123 22:59:13.948892 2983 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 22:59:13.950571 kubelet[2983]: I1123 22:59:13.950514 2983 server.go:317] "Adding debug handlers to kubelet server"
Nov 23 22:59:13.950868 kubelet[2983]: I1123 22:59:13.950803 2983 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 22:59:13.951495 kubelet[2983]: I1123 22:59:13.951458 2983 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 22:59:13.954175 kubelet[2983]: E1123 22:59:13.951861 2983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.27:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.27:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-27.187ac4f37264becd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-27,UID:ip-172-31-24-27,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-27,},FirstTimestamp:2025-11-23 22:59:13.944272589 +0000 UTC m=+1.456451869,LastTimestamp:2025-11-23 22:59:13.944272589 +0000 UTC m=+1.456451869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-27,}"
Nov 23 22:59:13.958598 kubelet[2983]: I1123 22:59:13.958475 2983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 23 22:59:13.959603 kubelet[2983]: I1123 22:59:13.959105 2983 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 23 22:59:13.965617 kubelet[2983]: E1123 22:59:13.964949 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-27\" not found"
Nov 23 22:59:13.965617 kubelet[2983]: I1123 22:59:13.965008 2983 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 23 22:59:13.965617 kubelet[2983]: I1123 22:59:13.965330 2983 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 23 22:59:13.965892 kubelet[2983]: I1123 22:59:13.965688 2983 reconciler.go:26] "Reconciler: start to sync state"
Nov 23 22:59:13.966541 kubelet[2983]: E1123 22:59:13.966482 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 23 22:59:13.966984 kubelet[2983]: E1123 22:59:13.966766 2983 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 23 22:59:13.968003 kubelet[2983]: I1123 22:59:13.967948 2983 factory.go:223] Registration of the systemd container factory successfully
Nov 23 22:59:13.968151 kubelet[2983]: I1123 22:59:13.968110 2983 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 23 22:59:13.971171 kubelet[2983]: I1123 22:59:13.971124 2983 factory.go:223] Registration of the containerd container factory successfully
Nov 23 22:59:13.989083 kubelet[2983]: E1123 22:59:13.988996 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-27?timeout=10s\": dial tcp 172.31.24.27:6443: connect: connection refused" interval="200ms"
Nov 23 22:59:14.009797 kubelet[2983]: I1123 22:59:14.008473 2983 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 23 22:59:14.009797 kubelet[2983]: I1123 22:59:14.008564 2983 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 23 22:59:14.009797 kubelet[2983]: I1123 22:59:14.008623 2983 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 22:59:14.015243 kubelet[2983]: I1123 22:59:14.014817 2983 policy_none.go:49] "None policy: Start"
Nov 23 22:59:14.015243 kubelet[2983]: I1123 22:59:14.014864 2983 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 23 22:59:14.015243 kubelet[2983]: I1123 22:59:14.014888 2983 state_mem.go:35] "Initializing new in-memory state store"
Nov 23 22:59:14.029519 kubelet[2983]: I1123 22:59:14.029464 2983 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 23 22:59:14.030720 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 23 22:59:14.034476 kubelet[2983]: I1123 22:59:14.034440 2983 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 23 22:59:14.034831 kubelet[2983]: I1123 22:59:14.034641 2983 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 23 22:59:14.034831 kubelet[2983]: I1123 22:59:14.034680 2983 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 23 22:59:14.034831 kubelet[2983]: I1123 22:59:14.034697 2983 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 23 22:59:14.035087 kubelet[2983]: E1123 22:59:14.035057 2983 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 23 22:59:14.043017 kubelet[2983]: E1123 22:59:14.042971 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 23 22:59:14.065423 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 23 22:59:14.066107 kubelet[2983]: E1123 22:59:14.066011 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-27\" not found"
Nov 23 22:59:14.073599 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 23 22:59:14.088759 kubelet[2983]: E1123 22:59:14.088592 2983 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 23 22:59:14.089779 kubelet[2983]: I1123 22:59:14.089655 2983 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 23 22:59:14.089779 kubelet[2983]: I1123 22:59:14.089684 2983 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 23 22:59:14.090280 kubelet[2983]: I1123 22:59:14.090254 2983 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 23 22:59:14.092668 kubelet[2983]: E1123 22:59:14.092620 2983 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 23 22:59:14.092906 kubelet[2983]: E1123 22:59:14.092859 2983 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-27\" not found"
Nov 23 22:59:14.157713 systemd[1]: Created slice kubepods-burstable-pod0001c5e8e9bf0350e5c60214c9e8518b.slice - libcontainer container kubepods-burstable-pod0001c5e8e9bf0350e5c60214c9e8518b.slice.
Nov 23 22:59:14.166487 kubelet[2983]: I1123 22:59:14.166223 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0001c5e8e9bf0350e5c60214c9e8518b-ca-certs\") pod \"kube-apiserver-ip-172-31-24-27\" (UID: \"0001c5e8e9bf0350e5c60214c9e8518b\") " pod="kube-system/kube-apiserver-ip-172-31-24-27" Nov 23 22:59:14.166487 kubelet[2983]: I1123 22:59:14.166302 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0001c5e8e9bf0350e5c60214c9e8518b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-27\" (UID: \"0001c5e8e9bf0350e5c60214c9e8518b\") " pod="kube-system/kube-apiserver-ip-172-31-24-27" Nov 23 22:59:14.166487 kubelet[2983]: I1123 22:59:14.166347 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:14.166487 kubelet[2983]: I1123 22:59:14.166388 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:14.166487 kubelet[2983]: I1123 22:59:14.166426 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: 
\"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:14.166999 kubelet[2983]: I1123 22:59:14.166462 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0001c5e8e9bf0350e5c60214c9e8518b-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-27\" (UID: \"0001c5e8e9bf0350e5c60214c9e8518b\") " pod="kube-system/kube-apiserver-ip-172-31-24-27" Nov 23 22:59:14.166999 kubelet[2983]: I1123 22:59:14.166497 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:14.166999 kubelet[2983]: I1123 22:59:14.166531 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:14.166999 kubelet[2983]: I1123 22:59:14.166565 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7a7428a1102170b053a4d69f481f705-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-27\" (UID: \"b7a7428a1102170b053a4d69f481f705\") " pod="kube-system/kube-scheduler-ip-172-31-24-27" Nov 23 22:59:14.170451 kubelet[2983]: E1123 22:59:14.170412 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:14.178644 systemd[1]: Created slice 
kubepods-burstable-podc797bc380fdceedc1a58bd4185e8af6a.slice - libcontainer container kubepods-burstable-podc797bc380fdceedc1a58bd4185e8af6a.slice. Nov 23 22:59:14.185225 kubelet[2983]: E1123 22:59:14.185128 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:14.189061 systemd[1]: Created slice kubepods-burstable-podb7a7428a1102170b053a4d69f481f705.slice - libcontainer container kubepods-burstable-podb7a7428a1102170b053a4d69f481f705.slice. Nov 23 22:59:14.191262 kubelet[2983]: E1123 22:59:14.191204 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-27?timeout=10s\": dial tcp 172.31.24.27:6443: connect: connection refused" interval="400ms" Nov 23 22:59:14.193959 kubelet[2983]: E1123 22:59:14.193683 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:14.194319 kubelet[2983]: I1123 22:59:14.194278 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-27" Nov 23 22:59:14.194983 kubelet[2983]: E1123 22:59:14.194932 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.27:6443/api/v1/nodes\": dial tcp 172.31.24.27:6443: connect: connection refused" node="ip-172-31-24-27" Nov 23 22:59:14.397456 kubelet[2983]: I1123 22:59:14.397079 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-27" Nov 23 22:59:14.397576 kubelet[2983]: E1123 22:59:14.397543 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.27:6443/api/v1/nodes\": dial tcp 172.31.24.27:6443: connect: connection refused" node="ip-172-31-24-27" Nov 23 22:59:14.472794 containerd[2015]: 
time="2025-11-23T22:59:14.472632748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-27,Uid:0001c5e8e9bf0350e5c60214c9e8518b,Namespace:kube-system,Attempt:0,}" Nov 23 22:59:14.487966 containerd[2015]: time="2025-11-23T22:59:14.487893623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-27,Uid:c797bc380fdceedc1a58bd4185e8af6a,Namespace:kube-system,Attempt:0,}" Nov 23 22:59:14.495386 containerd[2015]: time="2025-11-23T22:59:14.495301265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-27,Uid:b7a7428a1102170b053a4d69f481f705,Namespace:kube-system,Attempt:0,}" Nov 23 22:59:14.520663 containerd[2015]: time="2025-11-23T22:59:14.520445155Z" level=info msg="connecting to shim f8318562d357fafb2a189afcf3f2b269084fe9b857b5dc97ee05beb87cdbd359" address="unix:///run/containerd/s/c9f60b5d658b84522dda2bdee5b2c9fe05a71988bbe6c7e2e070242065dd3eed" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:14.587129 systemd[1]: Started cri-containerd-f8318562d357fafb2a189afcf3f2b269084fe9b857b5dc97ee05beb87cdbd359.scope - libcontainer container f8318562d357fafb2a189afcf3f2b269084fe9b857b5dc97ee05beb87cdbd359. 
Nov 23 22:59:14.593019 kubelet[2983]: E1123 22:59:14.592957 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-27?timeout=10s\": dial tcp 172.31.24.27:6443: connect: connection refused" interval="800ms" Nov 23 22:59:14.597756 containerd[2015]: time="2025-11-23T22:59:14.597074399Z" level=info msg="connecting to shim 11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06" address="unix:///run/containerd/s/178e946813f84e7ee277baa5d37f7312ba1ffc029fda7c3aae262cfaba30c1ca" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:14.601975 containerd[2015]: time="2025-11-23T22:59:14.601862368Z" level=info msg="connecting to shim ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7" address="unix:///run/containerd/s/a94ae28a2049d6f5a060da8ea82da29dccff7e1457260fa105b5f8acd19a7cf1" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:14.699371 systemd[1]: Started cri-containerd-ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7.scope - libcontainer container ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7. Nov 23 22:59:14.712597 systemd[1]: Started cri-containerd-11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06.scope - libcontainer container 11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06. 
Nov 23 22:59:14.725080 containerd[2015]: time="2025-11-23T22:59:14.724524645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-27,Uid:0001c5e8e9bf0350e5c60214c9e8518b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8318562d357fafb2a189afcf3f2b269084fe9b857b5dc97ee05beb87cdbd359\"" Nov 23 22:59:14.746824 containerd[2015]: time="2025-11-23T22:59:14.746759625Z" level=info msg="CreateContainer within sandbox \"f8318562d357fafb2a189afcf3f2b269084fe9b857b5dc97ee05beb87cdbd359\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 22:59:14.768296 containerd[2015]: time="2025-11-23T22:59:14.768217517Z" level=info msg="Container dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:14.792777 containerd[2015]: time="2025-11-23T22:59:14.792060737Z" level=info msg="CreateContainer within sandbox \"f8318562d357fafb2a189afcf3f2b269084fe9b857b5dc97ee05beb87cdbd359\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e\"" Nov 23 22:59:14.794804 containerd[2015]: time="2025-11-23T22:59:14.794235780Z" level=info msg="StartContainer for \"dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e\"" Nov 23 22:59:14.799031 containerd[2015]: time="2025-11-23T22:59:14.798969494Z" level=info msg="connecting to shim dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e" address="unix:///run/containerd/s/c9f60b5d658b84522dda2bdee5b2c9fe05a71988bbe6c7e2e070242065dd3eed" protocol=ttrpc version=3 Nov 23 22:59:14.803908 kubelet[2983]: I1123 22:59:14.803137 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-27" Nov 23 22:59:14.803908 kubelet[2983]: E1123 22:59:14.803635 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.27:6443/api/v1/nodes\": dial tcp 172.31.24.27:6443: 
connect: connection refused" node="ip-172-31-24-27" Nov 23 22:59:14.816236 kubelet[2983]: E1123 22:59:14.815828 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 22:59:14.871462 containerd[2015]: time="2025-11-23T22:59:14.871387957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-27,Uid:c797bc380fdceedc1a58bd4185e8af6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06\"" Nov 23 22:59:14.873107 systemd[1]: Started cri-containerd-dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e.scope - libcontainer container dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e. 
Nov 23 22:59:14.887913 containerd[2015]: time="2025-11-23T22:59:14.887618833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-27,Uid:b7a7428a1102170b053a4d69f481f705,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7\"" Nov 23 22:59:14.889972 containerd[2015]: time="2025-11-23T22:59:14.889896143Z" level=info msg="CreateContainer within sandbox \"11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 22:59:14.906840 containerd[2015]: time="2025-11-23T22:59:14.904362196Z" level=info msg="CreateContainer within sandbox \"ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 22:59:14.915835 containerd[2015]: time="2025-11-23T22:59:14.915196771Z" level=info msg="Container 74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:14.928956 kubelet[2983]: E1123 22:59:14.928819 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.27:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 22:59:14.933302 containerd[2015]: time="2025-11-23T22:59:14.933251946Z" level=info msg="Container a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:14.943439 containerd[2015]: time="2025-11-23T22:59:14.943359197Z" level=info msg="CreateContainer within sandbox \"11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a\"" Nov 23 22:59:14.946781 containerd[2015]: time="2025-11-23T22:59:14.945971415Z" level=info msg="StartContainer for \"74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a\"" Nov 23 22:59:14.947477 containerd[2015]: time="2025-11-23T22:59:14.947432389Z" level=info msg="CreateContainer within sandbox \"ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f\"" Nov 23 22:59:14.950134 containerd[2015]: time="2025-11-23T22:59:14.950058257Z" level=info msg="StartContainer for \"a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f\"" Nov 23 22:59:14.952507 containerd[2015]: time="2025-11-23T22:59:14.952387313Z" level=info msg="connecting to shim 74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a" address="unix:///run/containerd/s/178e946813f84e7ee277baa5d37f7312ba1ffc029fda7c3aae262cfaba30c1ca" protocol=ttrpc version=3 Nov 23 22:59:14.954900 containerd[2015]: time="2025-11-23T22:59:14.954839527Z" level=info msg="connecting to shim a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f" address="unix:///run/containerd/s/a94ae28a2049d6f5a060da8ea82da29dccff7e1457260fa105b5f8acd19a7cf1" protocol=ttrpc version=3 Nov 23 22:59:14.996098 systemd[1]: Started cri-containerd-74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a.scope - libcontainer container 74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a. Nov 23 22:59:15.028065 systemd[1]: Started cri-containerd-a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f.scope - libcontainer container a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f. 
Nov 23 22:59:15.039039 containerd[2015]: time="2025-11-23T22:59:15.038973277Z" level=info msg="StartContainer for \"dd461f8baaf551e0712916ef6a79afe5b5d715e1ef82868630e1ff559bf4713e\" returns successfully" Nov 23 22:59:15.086344 kubelet[2983]: E1123 22:59:15.086293 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:15.221456 containerd[2015]: time="2025-11-23T22:59:15.221356388Z" level=info msg="StartContainer for \"74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a\" returns successfully" Nov 23 22:59:15.243030 containerd[2015]: time="2025-11-23T22:59:15.242217726Z" level=info msg="StartContainer for \"a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f\" returns successfully" Nov 23 22:59:15.606476 kubelet[2983]: I1123 22:59:15.606422 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-27" Nov 23 22:59:16.100178 kubelet[2983]: E1123 22:59:16.100048 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:16.112852 kubelet[2983]: E1123 22:59:16.112324 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:16.113192 kubelet[2983]: E1123 22:59:16.112718 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:17.112869 kubelet[2983]: E1123 22:59:17.112282 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:17.116542 kubelet[2983]: E1123 22:59:17.116336 2983 kubelet.go:3305] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:17.315810 kubelet[2983]: E1123 22:59:17.314364 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:18.116045 kubelet[2983]: E1123 22:59:18.115663 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:18.118357 kubelet[2983]: E1123 22:59:18.118295 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-27\" not found" node="ip-172-31-24-27" Nov 23 22:59:18.247248 kubelet[2983]: I1123 22:59:18.247198 2983 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-27" Nov 23 22:59:18.248451 kubelet[2983]: E1123 22:59:18.248357 2983 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-27\": node \"ip-172-31-24-27\" not found" Nov 23 22:59:18.278329 kubelet[2983]: I1123 22:59:18.278016 2983 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-27" Nov 23 22:59:18.325210 kubelet[2983]: E1123 22:59:18.325166 2983 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-27\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-27" Nov 23 22:59:18.325636 kubelet[2983]: I1123 22:59:18.325438 2983 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:18.331279 kubelet[2983]: E1123 22:59:18.330927 2983 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-27\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-27" Nov 23 22:59:18.331279 kubelet[2983]: I1123 22:59:18.330974 2983 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-27" Nov 23 22:59:18.334199 kubelet[2983]: E1123 22:59:18.334154 2983 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-27\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-27" Nov 23 22:59:18.925625 update_engine[1985]: I20251123 22:59:18.924782 1985 update_attempter.cc:509] Updating boot flags... Nov 23 22:59:18.937868 kubelet[2983]: I1123 22:59:18.937825 2983 apiserver.go:52] "Watching apiserver" Nov 23 22:59:18.965921 kubelet[2983]: I1123 22:59:18.965842 2983 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:59:19.117750 kubelet[2983]: I1123 22:59:19.117692 2983 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-27" Nov 23 22:59:22.005523 systemd[1]: Reload requested from client PID 3453 ('systemctl') (unit session-9.scope)... Nov 23 22:59:22.005555 systemd[1]: Reloading... Nov 23 22:59:22.292846 zram_generator::config[3501]: No configuration found. Nov 23 22:59:22.785142 systemd[1]: Reloading finished in 778 ms. Nov 23 22:59:22.852475 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:59:22.869533 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:59:22.870132 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:59:22.870218 systemd[1]: kubelet.service: Consumed 2.319s CPU time, 128.6M memory peak. Nov 23 22:59:22.875075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:59:23.247831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 22:59:23.266377 (kubelet)[3557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:59:23.398943 kubelet[3557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:59:23.401793 kubelet[3557]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:59:23.401793 kubelet[3557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:59:23.401793 kubelet[3557]: I1123 22:59:23.400825 3557 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:59:23.424003 kubelet[3557]: I1123 22:59:23.423662 3557 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 23 22:59:23.424003 kubelet[3557]: I1123 22:59:23.423708 3557 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:59:23.425857 kubelet[3557]: I1123 22:59:23.425798 3557 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:59:23.432925 kubelet[3557]: I1123 22:59:23.432867 3557 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 22:59:23.454159 kubelet[3557]: I1123 22:59:23.454099 3557 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:59:23.473036 kubelet[3557]: I1123 22:59:23.472966 3557 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Nov 23 22:59:23.479569 kubelet[3557]: I1123 22:59:23.479506 3557 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 23 22:59:23.480245 kubelet[3557]: I1123 22:59:23.480200 3557 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:59:23.480594 kubelet[3557]: I1123 22:59:23.480351 3557 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"Cgro
upVersion":2} Nov 23 22:59:23.480833 kubelet[3557]: I1123 22:59:23.480811 3557 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:59:23.480941 kubelet[3557]: I1123 22:59:23.480924 3557 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 22:59:23.481099 kubelet[3557]: I1123 22:59:23.481081 3557 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:59:23.481464 kubelet[3557]: I1123 22:59:23.481444 3557 kubelet.go:480] "Attempting to sync node with API server" Nov 23 22:59:23.481872 kubelet[3557]: I1123 22:59:23.481849 3557 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:59:23.482679 kubelet[3557]: I1123 22:59:23.482651 3557 kubelet.go:386] "Adding apiserver pod source" Nov 23 22:59:23.483853 kubelet[3557]: I1123 22:59:23.482942 3557 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:59:23.491069 kubelet[3557]: I1123 22:59:23.490918 3557 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:59:23.493070 kubelet[3557]: I1123 22:59:23.492899 3557 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:59:23.508162 kubelet[3557]: I1123 22:59:23.508130 3557 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:59:23.508381 kubelet[3557]: I1123 22:59:23.508363 3557 server.go:1289] "Started kubelet" Nov 23 22:59:23.518391 kubelet[3557]: I1123 22:59:23.518333 3557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:59:23.524218 kubelet[3557]: I1123 22:59:23.523960 3557 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:59:23.528842 kubelet[3557]: I1123 22:59:23.528374 3557 server.go:317] "Adding debug handlers to kubelet server" Nov 23 22:59:23.541752 kubelet[3557]: I1123 22:59:23.541107 3557 ratelimit.go:55] 
"Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:59:23.542970 kubelet[3557]: I1123 22:59:23.542626 3557 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:59:23.578554 kubelet[3557]: I1123 22:59:23.546269 3557 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:59:23.582458 kubelet[3557]: I1123 22:59:23.546319 3557 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:59:23.582458 kubelet[3557]: E1123 22:59:23.548127 3557 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-27\" not found" Nov 23 22:59:23.582458 kubelet[3557]: I1123 22:59:23.580930 3557 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:59:23.599776 kubelet[3557]: I1123 22:59:23.598999 3557 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:59:23.599776 kubelet[3557]: I1123 22:59:23.599158 3557 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:59:23.605279 kubelet[3557]: I1123 22:59:23.605228 3557 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:59:23.634142 kubelet[3557]: E1123 22:59:23.633962 3557 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:59:23.652767 kubelet[3557]: I1123 22:59:23.652678 3557 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:59:23.665969 kubelet[3557]: I1123 22:59:23.665906 3557 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 23 22:59:23.671814 kubelet[3557]: I1123 22:59:23.670906 3557 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 23 22:59:23.671814 kubelet[3557]: I1123 22:59:23.670949 3557 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 23 22:59:23.671814 kubelet[3557]: I1123 22:59:23.670980 3557 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 23 22:59:23.671814 kubelet[3557]: I1123 22:59:23.670994 3557 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 23 22:59:23.671814 kubelet[3557]: E1123 22:59:23.671067 3557 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 23 22:59:23.771979 kubelet[3557]: E1123 22:59:23.771131 3557 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794276 3557 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794305 3557 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794340 3557 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794565 3557 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794584 3557 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794615 3557 policy_none.go:49] "None policy: Start"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794631 3557 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794650 3557 state_mem.go:35] "Initializing new in-memory state store"
Nov 23 22:59:23.795776 kubelet[3557]: I1123 22:59:23.794836 3557 state_mem.go:75] "Updated machine memory state"
Nov 23 22:59:23.806617 kubelet[3557]: E1123 22:59:23.806570 3557 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 23 22:59:23.807438 kubelet[3557]: I1123 22:59:23.806872 3557 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 23 22:59:23.807438 kubelet[3557]: I1123 22:59:23.806890 3557 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 23 22:59:23.809061 kubelet[3557]: I1123 22:59:23.809013 3557 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 23 22:59:23.829077 kubelet[3557]: E1123 22:59:23.829002 3557 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 23 22:59:23.937564 kubelet[3557]: I1123 22:59:23.937506 3557 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-27"
Nov 23 22:59:23.955575 kubelet[3557]: I1123 22:59:23.955439 3557 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-27"
Nov 23 22:59:23.956075 kubelet[3557]: I1123 22:59:23.955717 3557 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-27"
Nov 23 22:59:23.974776 kubelet[3557]: I1123 22:59:23.974699 3557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-27"
Nov 23 22:59:23.976751 kubelet[3557]: I1123 22:59:23.976599 3557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-27"
Nov 23 22:59:23.977750 kubelet[3557]: I1123 22:59:23.977505 3557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-27"
Nov 23 22:59:23.992760 kubelet[3557]: I1123 22:59:23.992699 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0001c5e8e9bf0350e5c60214c9e8518b-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-27\" (UID: \"0001c5e8e9bf0350e5c60214c9e8518b\") " pod="kube-system/kube-apiserver-ip-172-31-24-27"
Nov 23 22:59:23.993878 kubelet[3557]: I1123 22:59:23.993037 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0001c5e8e9bf0350e5c60214c9e8518b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-27\" (UID: \"0001c5e8e9bf0350e5c60214c9e8518b\") " pod="kube-system/kube-apiserver-ip-172-31-24-27"
Nov 23 22:59:23.993878 kubelet[3557]: I1123 22:59:23.993085 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27"
Nov 23 22:59:23.995762 kubelet[3557]: I1123 22:59:23.994633 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27"
Nov 23 22:59:23.995762 kubelet[3557]: I1123 22:59:23.994751 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7a7428a1102170b053a4d69f481f705-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-27\" (UID: \"b7a7428a1102170b053a4d69f481f705\") " pod="kube-system/kube-scheduler-ip-172-31-24-27"
Nov 23 22:59:23.995762 kubelet[3557]: I1123 22:59:23.994799 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0001c5e8e9bf0350e5c60214c9e8518b-ca-certs\") pod \"kube-apiserver-ip-172-31-24-27\" (UID: \"0001c5e8e9bf0350e5c60214c9e8518b\") " pod="kube-system/kube-apiserver-ip-172-31-24-27"
Nov 23 22:59:23.995762 kubelet[3557]: I1123 22:59:23.994839 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27"
Nov 23 22:59:23.995762 kubelet[3557]: I1123 22:59:23.994880 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27"
Nov 23 22:59:23.996140 kubelet[3557]: I1123 22:59:23.994916 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c797bc380fdceedc1a58bd4185e8af6a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-27\" (UID: \"c797bc380fdceedc1a58bd4185e8af6a\") " pod="kube-system/kube-controller-manager-ip-172-31-24-27"
Nov 23 22:59:24.011262 kubelet[3557]: E1123 22:59:24.010944 3557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-27\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-27"
Nov 23 22:59:24.488086 kubelet[3557]: I1123 22:59:24.488021 3557 apiserver.go:52] "Watching apiserver"
Nov 23 22:59:24.560185 kubelet[3557]: I1123 22:59:24.559852 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-27" podStartSLOduration=1.559127502 podStartE2EDuration="1.559127502s" podCreationTimestamp="2025-11-23 22:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:59:24.553720047 +0000 UTC m=+1.276019625" watchObservedRunningTime="2025-11-23 22:59:24.559127502 +0000 UTC m=+1.281427068"
Nov 23 22:59:24.581205 kubelet[3557]: I1123 22:59:24.581114 3557 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 23 22:59:24.608752 kubelet[3557]: I1123 22:59:24.608170 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-27" podStartSLOduration=1.608152106 podStartE2EDuration="1.608152106s" podCreationTimestamp="2025-11-23 22:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:59:24.608091284 +0000 UTC m=+1.330390862" watchObservedRunningTime="2025-11-23 22:59:24.608152106 +0000 UTC m=+1.330451672"
Nov 23 22:59:24.608752 kubelet[3557]: I1123 22:59:24.608292 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-27" podStartSLOduration=5.608281435 podStartE2EDuration="5.608281435s" podCreationTimestamp="2025-11-23 22:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:59:24.577215297 +0000 UTC m=+1.299514875" watchObservedRunningTime="2025-11-23 22:59:24.608281435 +0000 UTC m=+1.330581013"
Nov 23 22:59:24.737312 kubelet[3557]: I1123 22:59:24.736722 3557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-27"
Nov 23 22:59:24.738381 kubelet[3557]: I1123 22:59:24.738264 3557 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-27"
Nov 23 22:59:24.755357 kubelet[3557]: E1123 22:59:24.755312 3557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-27\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-27"
Nov 23 22:59:24.760857 kubelet[3557]: E1123 22:59:24.760559 3557 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-27\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-27"
Nov 23 22:59:26.006988 kubelet[3557]: I1123 22:59:26.006943 3557 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 23 22:59:26.008378 containerd[2015]: time="2025-11-23T22:59:26.008322571Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 23 22:59:26.009227 kubelet[3557]: I1123 22:59:26.009192 3557 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 23 22:59:27.083962 systemd[1]: Created slice kubepods-besteffort-podf35a7c23_0c33_4f7a_bc07_e13d5119b770.slice - libcontainer container kubepods-besteffort-podf35a7c23_0c33_4f7a_bc07_e13d5119b770.slice.
Nov 23 22:59:27.113008 kubelet[3557]: I1123 22:59:27.112956 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f35a7c23-0c33-4f7a-bc07-e13d5119b770-kube-proxy\") pod \"kube-proxy-8lzcd\" (UID: \"f35a7c23-0c33-4f7a-bc07-e13d5119b770\") " pod="kube-system/kube-proxy-8lzcd"
Nov 23 22:59:27.116274 kubelet[3557]: I1123 22:59:27.115992 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lfnd\" (UniqueName: \"kubernetes.io/projected/f35a7c23-0c33-4f7a-bc07-e13d5119b770-kube-api-access-9lfnd\") pod \"kube-proxy-8lzcd\" (UID: \"f35a7c23-0c33-4f7a-bc07-e13d5119b770\") " pod="kube-system/kube-proxy-8lzcd"
Nov 23 22:59:27.116274 kubelet[3557]: I1123 22:59:27.116111 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f35a7c23-0c33-4f7a-bc07-e13d5119b770-xtables-lock\") pod \"kube-proxy-8lzcd\" (UID: \"f35a7c23-0c33-4f7a-bc07-e13d5119b770\") " pod="kube-system/kube-proxy-8lzcd"
Nov 23 22:59:27.116566 kubelet[3557]: I1123 22:59:27.116212 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f35a7c23-0c33-4f7a-bc07-e13d5119b770-lib-modules\") pod \"kube-proxy-8lzcd\" (UID: \"f35a7c23-0c33-4f7a-bc07-e13d5119b770\") " pod="kube-system/kube-proxy-8lzcd"
Nov 23 22:59:27.291365 systemd[1]: Created slice kubepods-besteffort-pod718a1a35_2ee9_4e4d_b7fb_2565f19bd904.slice - libcontainer container kubepods-besteffort-pod718a1a35_2ee9_4e4d_b7fb_2565f19bd904.slice.
Nov 23 22:59:27.319239 kubelet[3557]: I1123 22:59:27.318823 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn9xd\" (UniqueName: \"kubernetes.io/projected/718a1a35-2ee9-4e4d-b7fb-2565f19bd904-kube-api-access-xn9xd\") pod \"tigera-operator-7dcd859c48-tpwn2\" (UID: \"718a1a35-2ee9-4e4d-b7fb-2565f19bd904\") " pod="tigera-operator/tigera-operator-7dcd859c48-tpwn2"
Nov 23 22:59:27.319239 kubelet[3557]: I1123 22:59:27.318918 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/718a1a35-2ee9-4e4d-b7fb-2565f19bd904-var-lib-calico\") pod \"tigera-operator-7dcd859c48-tpwn2\" (UID: \"718a1a35-2ee9-4e4d-b7fb-2565f19bd904\") " pod="tigera-operator/tigera-operator-7dcd859c48-tpwn2"
Nov 23 22:59:27.396866 containerd[2015]: time="2025-11-23T22:59:27.396535733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lzcd,Uid:f35a7c23-0c33-4f7a-bc07-e13d5119b770,Namespace:kube-system,Attempt:0,}"
Nov 23 22:59:27.442775 containerd[2015]: time="2025-11-23T22:59:27.442289614Z" level=info msg="connecting to shim 0eea6f5c396d9b4c4785e43cda282854c91dcf043e2a84ffddebde99f635e822" address="unix:///run/containerd/s/a41ef060380b9ef9aee936fbb13c4a1015bb3e9af15c6855cdac7d18f72670dc" namespace=k8s.io protocol=ttrpc version=3
Nov 23 22:59:27.497086 systemd[1]: Started cri-containerd-0eea6f5c396d9b4c4785e43cda282854c91dcf043e2a84ffddebde99f635e822.scope - libcontainer container 0eea6f5c396d9b4c4785e43cda282854c91dcf043e2a84ffddebde99f635e822.
Nov 23 22:59:27.556504 containerd[2015]: time="2025-11-23T22:59:27.556440238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8lzcd,Uid:f35a7c23-0c33-4f7a-bc07-e13d5119b770,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eea6f5c396d9b4c4785e43cda282854c91dcf043e2a84ffddebde99f635e822\""
Nov 23 22:59:27.571700 containerd[2015]: time="2025-11-23T22:59:27.571633447Z" level=info msg="CreateContainer within sandbox \"0eea6f5c396d9b4c4785e43cda282854c91dcf043e2a84ffddebde99f635e822\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 23 22:59:27.594819 containerd[2015]: time="2025-11-23T22:59:27.594045527Z" level=info msg="Container 6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:27.605376 containerd[2015]: time="2025-11-23T22:59:27.605316784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tpwn2,Uid:718a1a35-2ee9-4e4d-b7fb-2565f19bd904,Namespace:tigera-operator,Attempt:0,}"
Nov 23 22:59:27.615086 containerd[2015]: time="2025-11-23T22:59:27.614939990Z" level=info msg="CreateContainer within sandbox \"0eea6f5c396d9b4c4785e43cda282854c91dcf043e2a84ffddebde99f635e822\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86\""
Nov 23 22:59:27.616263 containerd[2015]: time="2025-11-23T22:59:27.616142979Z" level=info msg="StartContainer for \"6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86\""
Nov 23 22:59:27.620530 containerd[2015]: time="2025-11-23T22:59:27.619763316Z" level=info msg="connecting to shim 6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86" address="unix:///run/containerd/s/a41ef060380b9ef9aee936fbb13c4a1015bb3e9af15c6855cdac7d18f72670dc" protocol=ttrpc version=3
Nov 23 22:59:27.660055 containerd[2015]: time="2025-11-23T22:59:27.659463845Z" level=info msg="connecting to shim 509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e" address="unix:///run/containerd/s/e384f5052c5f0eb43377b02f517f7aca09efbeef29764606f1af63526aae2a77" namespace=k8s.io protocol=ttrpc version=3
Nov 23 22:59:27.668174 systemd[1]: Started cri-containerd-6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86.scope - libcontainer container 6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86.
Nov 23 22:59:27.749084 systemd[1]: Started cri-containerd-509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e.scope - libcontainer container 509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e.
Nov 23 22:59:27.833404 containerd[2015]: time="2025-11-23T22:59:27.833334498Z" level=info msg="StartContainer for \"6796ca5d628dba4592f19963e831921ee7289ce27f1494bfdbf7d1a6d7015f86\" returns successfully"
Nov 23 22:59:27.873569 containerd[2015]: time="2025-11-23T22:59:27.873499659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-tpwn2,Uid:718a1a35-2ee9-4e4d-b7fb-2565f19bd904,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e\""
Nov 23 22:59:27.877873 containerd[2015]: time="2025-11-23T22:59:27.877673845Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 23 22:59:29.182370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667306386.mount: Deactivated successfully.
Nov 23 22:59:30.063989 containerd[2015]: time="2025-11-23T22:59:30.063715354Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:30.067001 containerd[2015]: time="2025-11-23T22:59:30.066922961Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 23 22:59:30.071756 containerd[2015]: time="2025-11-23T22:59:30.069945496Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:30.077619 containerd[2015]: time="2025-11-23T22:59:30.077534729Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:30.081020 containerd[2015]: time="2025-11-23T22:59:30.080956787Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.203220199s"
Nov 23 22:59:30.081337 containerd[2015]: time="2025-11-23T22:59:30.081019770Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 23 22:59:30.087519 containerd[2015]: time="2025-11-23T22:59:30.087446606Z" level=info msg="CreateContainer within sandbox \"509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 23 22:59:30.102668 containerd[2015]: time="2025-11-23T22:59:30.102596714Z" level=info msg="Container 49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:30.114713 containerd[2015]: time="2025-11-23T22:59:30.114653319Z" level=info msg="CreateContainer within sandbox \"509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\""
Nov 23 22:59:30.116311 containerd[2015]: time="2025-11-23T22:59:30.116237007Z" level=info msg="StartContainer for \"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\""
Nov 23 22:59:30.118414 containerd[2015]: time="2025-11-23T22:59:30.118355669Z" level=info msg="connecting to shim 49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea" address="unix:///run/containerd/s/e384f5052c5f0eb43377b02f517f7aca09efbeef29764606f1af63526aae2a77" protocol=ttrpc version=3
Nov 23 22:59:30.164033 systemd[1]: Started cri-containerd-49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea.scope - libcontainer container 49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea.
Nov 23 22:59:30.223084 containerd[2015]: time="2025-11-23T22:59:30.222995212Z" level=info msg="StartContainer for \"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\" returns successfully"
Nov 23 22:59:30.421344 kubelet[3557]: I1123 22:59:30.421157 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8lzcd" podStartSLOduration=3.4211345890000002 podStartE2EDuration="3.421134589s" podCreationTimestamp="2025-11-23 22:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:59:28.785522705 +0000 UTC m=+5.507822283" watchObservedRunningTime="2025-11-23 22:59:30.421134589 +0000 UTC m=+7.143434167"
Nov 23 22:59:39.117599 sudo[2401]: pam_unix(sudo:session): session closed for user root
Nov 23 22:59:39.379348 sshd[2400]: Connection closed by 139.178.68.195 port 51498
Nov 23 22:59:39.380218 sshd-session[2397]: pam_unix(sshd:session): session closed for user core
Nov 23 22:59:39.389942 systemd[1]: session-9.scope: Deactivated successfully.
Nov 23 22:59:39.393032 systemd[1]: session-9.scope: Consumed 9.902s CPU time, 220.8M memory peak.
Nov 23 22:59:39.398721 systemd[1]: sshd@8-172.31.24.27:22-139.178.68.195:51498.service: Deactivated successfully.
Nov 23 22:59:39.405488 systemd-logind[1980]: Session 9 logged out. Waiting for processes to exit.
Nov 23 22:59:39.409260 systemd-logind[1980]: Removed session 9.
Nov 23 22:59:54.106681 kubelet[3557]: I1123 22:59:54.106389 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-tpwn2" podStartSLOduration=24.901160752 podStartE2EDuration="27.106367836s" podCreationTimestamp="2025-11-23 22:59:27 +0000 UTC" firstStartedPulling="2025-11-23 22:59:27.876910155 +0000 UTC m=+4.599209721" lastFinishedPulling="2025-11-23 22:59:30.082117251 +0000 UTC m=+6.804416805" observedRunningTime="2025-11-23 22:59:30.813962112 +0000 UTC m=+7.536261690" watchObservedRunningTime="2025-11-23 22:59:54.106367836 +0000 UTC m=+30.828667414"
Nov 23 22:59:54.126589 systemd[1]: Created slice kubepods-besteffort-pod971b9126_fd1c_41a1_9b21_8682eb14d9b1.slice - libcontainer container kubepods-besteffort-pod971b9126_fd1c_41a1_9b21_8682eb14d9b1.slice.
Nov 23 22:59:54.213208 kubelet[3557]: I1123 22:59:54.212970 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/971b9126-fd1c-41a1-9b21-8682eb14d9b1-typha-certs\") pod \"calico-typha-d65df5fb9-pnc7b\" (UID: \"971b9126-fd1c-41a1-9b21-8682eb14d9b1\") " pod="calico-system/calico-typha-d65df5fb9-pnc7b"
Nov 23 22:59:54.213208 kubelet[3557]: I1123 22:59:54.213042 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gmj\" (UniqueName: \"kubernetes.io/projected/971b9126-fd1c-41a1-9b21-8682eb14d9b1-kube-api-access-s4gmj\") pod \"calico-typha-d65df5fb9-pnc7b\" (UID: \"971b9126-fd1c-41a1-9b21-8682eb14d9b1\") " pod="calico-system/calico-typha-d65df5fb9-pnc7b"
Nov 23 22:59:54.213208 kubelet[3557]: I1123 22:59:54.213088 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/971b9126-fd1c-41a1-9b21-8682eb14d9b1-tigera-ca-bundle\") pod \"calico-typha-d65df5fb9-pnc7b\" (UID: \"971b9126-fd1c-41a1-9b21-8682eb14d9b1\") " pod="calico-system/calico-typha-d65df5fb9-pnc7b"
Nov 23 22:59:54.396121 systemd[1]: Created slice kubepods-besteffort-podc6da6ee3_7ec6_4029_ad1d_9f06b12ca7c5.slice - libcontainer container kubepods-besteffort-podc6da6ee3_7ec6_4029_ad1d_9f06b12ca7c5.slice.
Nov 23 22:59:54.415165 kubelet[3557]: I1123 22:59:54.415079 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-node-certs\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415165 kubelet[3557]: I1123 22:59:54.415164 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-cni-log-dir\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415396 kubelet[3557]: I1123 22:59:54.415203 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-flexvol-driver-host\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415396 kubelet[3557]: I1123 22:59:54.415242 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-lib-modules\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415396 kubelet[3557]: I1123 22:59:54.415278 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-policysync\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415396 kubelet[3557]: I1123 22:59:54.415312 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-tigera-ca-bundle\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415396 kubelet[3557]: I1123 22:59:54.415352 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-xtables-lock\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415649 kubelet[3557]: I1123 22:59:54.415386 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfx7c\" (UniqueName: \"kubernetes.io/projected/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-kube-api-access-wfx7c\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415649 kubelet[3557]: I1123 22:59:54.415428 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-cni-bin-dir\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415649 kubelet[3557]: I1123 22:59:54.415463 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-var-lib-calico\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415649 kubelet[3557]: I1123 22:59:54.415506 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-cni-net-dir\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.415649 kubelet[3557]: I1123 22:59:54.415540 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5-var-run-calico\") pod \"calico-node-cc75f\" (UID: \"c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5\") " pod="calico-system/calico-node-cc75f"
Nov 23 22:59:54.440586 containerd[2015]: time="2025-11-23T22:59:54.440120075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d65df5fb9-pnc7b,Uid:971b9126-fd1c-41a1-9b21-8682eb14d9b1,Namespace:calico-system,Attempt:0,}"
Nov 23 22:59:54.511787 containerd[2015]: time="2025-11-23T22:59:54.509822422Z" level=info msg="connecting to shim fbeaf05f5f4432735e81e0a87abf199ea0e6d8626261fd1a254c4d2e0b637d07" address="unix:///run/containerd/s/abfff816a86533e89bcc54b901880252a995386d03949b325e302ffe0f5300b5" namespace=k8s.io protocol=ttrpc version=3
Nov 23 22:59:54.551964 kubelet[3557]: E1123 22:59:54.551906 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.551964 kubelet[3557]: W1123 22:59:54.551949 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.552176 kubelet[3557]: E1123 22:59:54.551986 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.605018 kubelet[3557]: E1123 22:59:54.604895 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.606307 kubelet[3557]: W1123 22:59:54.605867 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.606307 kubelet[3557]: E1123 22:59:54.605971 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.618663 kubelet[3557]: E1123 22:59:54.617616 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861"
Nov 23 22:59:54.625145 systemd[1]: Started cri-containerd-fbeaf05f5f4432735e81e0a87abf199ea0e6d8626261fd1a254c4d2e0b637d07.scope - libcontainer container fbeaf05f5f4432735e81e0a87abf199ea0e6d8626261fd1a254c4d2e0b637d07.
Nov 23 22:59:54.686071 kubelet[3557]: E1123 22:59:54.685812 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.686071 kubelet[3557]: W1123 22:59:54.685850 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.686071 kubelet[3557]: E1123 22:59:54.685883 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.687926 kubelet[3557]: E1123 22:59:54.686934 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.687926 kubelet[3557]: W1123 22:59:54.686997 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.687926 kubelet[3557]: E1123 22:59:54.687095 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.690177 kubelet[3557]: E1123 22:59:54.688877 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.690177 kubelet[3557]: W1123 22:59:54.688915 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.690177 kubelet[3557]: E1123 22:59:54.688974 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.690177 kubelet[3557]: E1123 22:59:54.689942 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.690177 kubelet[3557]: W1123 22:59:54.689970 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.690177 kubelet[3557]: E1123 22:59:54.689999 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.691473 kubelet[3557]: E1123 22:59:54.690707 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.691473 kubelet[3557]: W1123 22:59:54.690905 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.691473 kubelet[3557]: E1123 22:59:54.691071 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.692756 kubelet[3557]: E1123 22:59:54.692201 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.692756 kubelet[3557]: W1123 22:59:54.692282 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.692756 kubelet[3557]: E1123 22:59:54.692444 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.694632 kubelet[3557]: E1123 22:59:54.693620 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.694906 kubelet[3557]: W1123 22:59:54.694656 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.694906 kubelet[3557]: E1123 22:59:54.694697 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.695468 kubelet[3557]: E1123 22:59:54.695406 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.695468 kubelet[3557]: W1123 22:59:54.695444 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.696767 kubelet[3557]: E1123 22:59:54.695475 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.697117 kubelet[3557]: E1123 22:59:54.697072 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.697201 kubelet[3557]: W1123 22:59:54.697112 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.697201 kubelet[3557]: E1123 22:59:54.697145 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.698567 kubelet[3557]: E1123 22:59:54.698517 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.698567 kubelet[3557]: W1123 22:59:54.698563 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.698834 kubelet[3557]: E1123 22:59:54.698597 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.700722 kubelet[3557]: E1123 22:59:54.700637 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.700722 kubelet[3557]: W1123 22:59:54.700682 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.700722 kubelet[3557]: E1123 22:59:54.700719 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 23 22:59:54.701253 kubelet[3557]: E1123 22:59:54.701206 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.701253 kubelet[3557]: W1123 22:59:54.701241 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.701420 kubelet[3557]: E1123 22:59:54.701273 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.701706 kubelet[3557]: E1123 22:59:54.701669 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.701706 kubelet[3557]: W1123 22:59:54.701698 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.701870 kubelet[3557]: E1123 22:59:54.701739 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.702093 kubelet[3557]: E1123 22:59:54.702059 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.702093 kubelet[3557]: W1123 22:59:54.702087 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.702246 kubelet[3557]: E1123 22:59:54.702110 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.702926 kubelet[3557]: E1123 22:59:54.702870 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.702926 kubelet[3557]: W1123 22:59:54.702906 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.703250 kubelet[3557]: E1123 22:59:54.702935 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.703514 kubelet[3557]: E1123 22:59:54.703475 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.703514 kubelet[3557]: W1123 22:59:54.703507 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.703774 kubelet[3557]: E1123 22:59:54.703531 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.705055 kubelet[3557]: E1123 22:59:54.705005 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.705055 kubelet[3557]: W1123 22:59:54.705043 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.705256 kubelet[3557]: E1123 22:59:54.705076 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.705464 kubelet[3557]: E1123 22:59:54.705421 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.705464 kubelet[3557]: W1123 22:59:54.705449 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.706924 kubelet[3557]: E1123 22:59:54.705472 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.708557 kubelet[3557]: E1123 22:59:54.708493 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.708557 kubelet[3557]: W1123 22:59:54.708540 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.708793 kubelet[3557]: E1123 22:59:54.708574 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.709821 kubelet[3557]: E1123 22:59:54.709197 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.709821 kubelet[3557]: W1123 22:59:54.709815 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.710047 kubelet[3557]: E1123 22:59:54.709852 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.716525 containerd[2015]: time="2025-11-23T22:59:54.716462816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cc75f,Uid:c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5,Namespace:calico-system,Attempt:0,}" Nov 23 22:59:54.720589 kubelet[3557]: E1123 22:59:54.720539 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.720589 kubelet[3557]: W1123 22:59:54.720583 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.720849 kubelet[3557]: E1123 22:59:54.720616 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.720849 kubelet[3557]: I1123 22:59:54.720662 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/39270bf4-b6a6-4d62-8a14-a5e6fd018861-registration-dir\") pod \"csi-node-driver-xflhj\" (UID: \"39270bf4-b6a6-4d62-8a14-a5e6fd018861\") " pod="calico-system/csi-node-driver-xflhj" Nov 23 22:59:54.725418 kubelet[3557]: E1123 22:59:54.725358 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.725418 kubelet[3557]: W1123 22:59:54.725404 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.725620 kubelet[3557]: E1123 22:59:54.725455 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.725841 kubelet[3557]: I1123 22:59:54.725795 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrmc\" (UniqueName: \"kubernetes.io/projected/39270bf4-b6a6-4d62-8a14-a5e6fd018861-kube-api-access-rlrmc\") pod \"csi-node-driver-xflhj\" (UID: \"39270bf4-b6a6-4d62-8a14-a5e6fd018861\") " pod="calico-system/csi-node-driver-xflhj" Nov 23 22:59:54.730770 kubelet[3557]: E1123 22:59:54.729501 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.730770 kubelet[3557]: W1123 22:59:54.729965 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.730770 kubelet[3557]: E1123 22:59:54.730048 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.731025 kubelet[3557]: I1123 22:59:54.730878 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39270bf4-b6a6-4d62-8a14-a5e6fd018861-kubelet-dir\") pod \"csi-node-driver-xflhj\" (UID: \"39270bf4-b6a6-4d62-8a14-a5e6fd018861\") " pod="calico-system/csi-node-driver-xflhj" Nov 23 22:59:54.731601 kubelet[3557]: E1123 22:59:54.731525 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.731601 kubelet[3557]: W1123 22:59:54.731586 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.733676 kubelet[3557]: E1123 22:59:54.731631 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.733676 kubelet[3557]: I1123 22:59:54.731697 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/39270bf4-b6a6-4d62-8a14-a5e6fd018861-varrun\") pod \"csi-node-driver-xflhj\" (UID: \"39270bf4-b6a6-4d62-8a14-a5e6fd018861\") " pod="calico-system/csi-node-driver-xflhj" Nov 23 22:59:54.733844 kubelet[3557]: E1123 22:59:54.733706 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.733844 kubelet[3557]: W1123 22:59:54.733770 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.733844 kubelet[3557]: E1123 22:59:54.733804 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.734960 kubelet[3557]: I1123 22:59:54.733897 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/39270bf4-b6a6-4d62-8a14-a5e6fd018861-socket-dir\") pod \"csi-node-driver-xflhj\" (UID: \"39270bf4-b6a6-4d62-8a14-a5e6fd018861\") " pod="calico-system/csi-node-driver-xflhj" Nov 23 22:59:54.736898 kubelet[3557]: E1123 22:59:54.736841 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.736898 kubelet[3557]: W1123 22:59:54.736892 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.737086 kubelet[3557]: E1123 22:59:54.736928 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.741865 kubelet[3557]: E1123 22:59:54.739308 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.741865 kubelet[3557]: W1123 22:59:54.739352 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.741865 kubelet[3557]: E1123 22:59:54.740038 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.742194 kubelet[3557]: E1123 22:59:54.742015 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.742270 kubelet[3557]: W1123 22:59:54.742194 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.742403 kubelet[3557]: E1123 22:59:54.742362 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.744853 kubelet[3557]: E1123 22:59:54.744805 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.745806 kubelet[3557]: W1123 22:59:54.745059 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.745977 kubelet[3557]: E1123 22:59:54.745840 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.747118 kubelet[3557]: E1123 22:59:54.747067 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.747268 kubelet[3557]: W1123 22:59:54.747109 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.747445 kubelet[3557]: E1123 22:59:54.747286 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.750444 kubelet[3557]: E1123 22:59:54.750392 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.750444 kubelet[3557]: W1123 22:59:54.750434 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.751087 kubelet[3557]: E1123 22:59:54.750484 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.757190 kubelet[3557]: E1123 22:59:54.756904 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.757190 kubelet[3557]: W1123 22:59:54.756940 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.757190 kubelet[3557]: E1123 22:59:54.756974 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.757701 kubelet[3557]: E1123 22:59:54.757583 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.758651 kubelet[3557]: W1123 22:59:54.757961 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.758651 kubelet[3557]: E1123 22:59:54.758008 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.761802 kubelet[3557]: E1123 22:59:54.760363 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.762752 kubelet[3557]: W1123 22:59:54.762411 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.762752 kubelet[3557]: E1123 22:59:54.762483 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.771763 kubelet[3557]: E1123 22:59:54.768572 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.771763 kubelet[3557]: W1123 22:59:54.768611 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.771763 kubelet[3557]: E1123 22:59:54.768645 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.788820 containerd[2015]: time="2025-11-23T22:59:54.787698316Z" level=info msg="connecting to shim 44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0" address="unix:///run/containerd/s/d88e2fe1716d8d49b0da1b4be45a6dbb888f08ff3f67c1fc05da9fdf3fceee57" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:59:54.835926 kubelet[3557]: E1123 22:59:54.835875 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.835926 kubelet[3557]: W1123 22:59:54.835914 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.836115 kubelet[3557]: E1123 22:59:54.835946 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.837317 kubelet[3557]: E1123 22:59:54.837245 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.837452 kubelet[3557]: W1123 22:59:54.837280 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.837452 kubelet[3557]: E1123 22:59:54.837358 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.838720 kubelet[3557]: E1123 22:59:54.838463 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.838720 kubelet[3557]: W1123 22:59:54.838505 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.838720 kubelet[3557]: E1123 22:59:54.838534 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.840750 kubelet[3557]: E1123 22:59:54.840675 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.840750 kubelet[3557]: W1123 22:59:54.840709 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.840953 kubelet[3557]: E1123 22:59:54.840809 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.843780 kubelet[3557]: E1123 22:59:54.841632 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.843780 kubelet[3557]: W1123 22:59:54.841674 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.843780 kubelet[3557]: E1123 22:59:54.841704 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.843780 kubelet[3557]: E1123 22:59:54.843178 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.843780 kubelet[3557]: W1123 22:59:54.843206 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.843780 kubelet[3557]: E1123 22:59:54.843234 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.843780 kubelet[3557]: E1123 22:59:54.843585 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.843780 kubelet[3557]: W1123 22:59:54.843602 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.843780 kubelet[3557]: E1123 22:59:54.843622 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.846144 kubelet[3557]: E1123 22:59:54.846052 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.846144 kubelet[3557]: W1123 22:59:54.846092 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.846144 kubelet[3557]: E1123 22:59:54.846125 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.848533 kubelet[3557]: E1123 22:59:54.848400 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.848533 kubelet[3557]: W1123 22:59:54.848437 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.848533 kubelet[3557]: E1123 22:59:54.848468 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.850160 kubelet[3557]: E1123 22:59:54.850089 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.850485 kubelet[3557]: W1123 22:59:54.850438 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.850695 kubelet[3557]: E1123 22:59:54.850493 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:59:54.851161 kubelet[3557]: E1123 22:59:54.851123 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.851161 kubelet[3557]: W1123 22:59:54.851154 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.851950 kubelet[3557]: E1123 22:59:54.851183 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:59:54.853086 kubelet[3557]: E1123 22:59:54.852395 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:59:54.853086 kubelet[3557]: W1123 22:59:54.852438 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:59:54.853086 kubelet[3557]: E1123 22:59:54.852473 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 23 22:59:54.855465 kubelet[3557]: E1123 22:59:54.854418 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.855465 kubelet[3557]: W1123 22:59:54.854456 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.855465 kubelet[3557]: E1123 22:59:54.854489 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.856913 kubelet[3557]: E1123 22:59:54.856422 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.856913 kubelet[3557]: W1123 22:59:54.856459 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.856913 kubelet[3557]: E1123 22:59:54.856491 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.858753 kubelet[3557]: E1123 22:59:54.858662 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.858753 kubelet[3557]: W1123 22:59:54.858698 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.859206 kubelet[3557]: E1123 22:59:54.859158 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.860111 kubelet[3557]: E1123 22:59:54.860060 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.860111 kubelet[3557]: W1123 22:59:54.860096 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.860274 kubelet[3557]: E1123 22:59:54.860125 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.860935 kubelet[3557]: E1123 22:59:54.860864 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.860935 kubelet[3557]: W1123 22:59:54.860906 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.861138 kubelet[3557]: E1123 22:59:54.860938 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.863621 kubelet[3557]: E1123 22:59:54.861624 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.863621 kubelet[3557]: W1123 22:59:54.861661 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.863621 kubelet[3557]: E1123 22:59:54.861692 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.863621 kubelet[3557]: E1123 22:59:54.862921 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.863621 kubelet[3557]: W1123 22:59:54.862947 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.863621 kubelet[3557]: E1123 22:59:54.862977 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.863621 kubelet[3557]: E1123 22:59:54.863628 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.864226 kubelet[3557]: W1123 22:59:54.863652 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.864226 kubelet[3557]: E1123 22:59:54.863681 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.864595 systemd[1]: Started cri-containerd-44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0.scope - libcontainer container 44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0.
Nov 23 22:59:54.866952 kubelet[3557]: E1123 22:59:54.866353 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.866952 kubelet[3557]: W1123 22:59:54.866385 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.866952 kubelet[3557]: E1123 22:59:54.866417 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.868277 kubelet[3557]: E1123 22:59:54.868232 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.868277 kubelet[3557]: W1123 22:59:54.868266 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.868503 kubelet[3557]: E1123 22:59:54.868298 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.873708 kubelet[3557]: E1123 22:59:54.873654 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.873896 kubelet[3557]: W1123 22:59:54.873842 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.873896 kubelet[3557]: E1123 22:59:54.873882 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.875333 kubelet[3557]: E1123 22:59:54.875256 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.876544 kubelet[3557]: W1123 22:59:54.876095 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.876755 kubelet[3557]: E1123 22:59:54.876708 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.877639 kubelet[3557]: E1123 22:59:54.877386 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.877877 kubelet[3557]: W1123 22:59:54.877845 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.878753 kubelet[3557]: E1123 22:59:54.878107 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:54.923908 kubelet[3557]: E1123 22:59:54.923850 3557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 22:59:54.923908 kubelet[3557]: W1123 22:59:54.923887 3557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 22:59:54.924125 kubelet[3557]: E1123 22:59:54.923919 3557 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 22:59:55.076075 containerd[2015]: time="2025-11-23T22:59:55.075852607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cc75f,Uid:c6da6ee3-7ec6-4029-ad1d-9f06b12ca7c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\""
Nov 23 22:59:55.084124 containerd[2015]: time="2025-11-23T22:59:55.083995016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 23 22:59:55.092262 containerd[2015]: time="2025-11-23T22:59:55.092058522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d65df5fb9-pnc7b,Uid:971b9126-fd1c-41a1-9b21-8682eb14d9b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbeaf05f5f4432735e81e0a87abf199ea0e6d8626261fd1a254c4d2e0b637d07\""
Nov 23 22:59:55.673033 kubelet[3557]: E1123 22:59:55.672293 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861"
Nov 23 22:59:56.195092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601431104.mount: Deactivated successfully.
Nov 23 22:59:56.385780 containerd[2015]: time="2025-11-23T22:59:56.385369175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:56.387856 containerd[2015]: time="2025-11-23T22:59:56.387812240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570"
Nov 23 22:59:56.389984 containerd[2015]: time="2025-11-23T22:59:56.389914875Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:56.396002 containerd[2015]: time="2025-11-23T22:59:56.395921081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:56.398194 containerd[2015]: time="2025-11-23T22:59:56.398001889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.313920657s"
Nov 23 22:59:56.398194 containerd[2015]: time="2025-11-23T22:59:56.398060454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Nov 23 22:59:56.400397 containerd[2015]: time="2025-11-23T22:59:56.400277638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 23 22:59:56.408649 containerd[2015]: time="2025-11-23T22:59:56.408575417Z" level=info msg="CreateContainer within sandbox \"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 23 22:59:56.430838 containerd[2015]: time="2025-11-23T22:59:56.429466217Z" level=info msg="Container 7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:56.442923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194806004.mount: Deactivated successfully.
Nov 23 22:59:56.457263 containerd[2015]: time="2025-11-23T22:59:56.457080726Z" level=info msg="CreateContainer within sandbox \"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027\""
Nov 23 22:59:56.460064 containerd[2015]: time="2025-11-23T22:59:56.460000693Z" level=info msg="StartContainer for \"7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027\""
Nov 23 22:59:56.463228 containerd[2015]: time="2025-11-23T22:59:56.463164694Z" level=info msg="connecting to shim 7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027" address="unix:///run/containerd/s/d88e2fe1716d8d49b0da1b4be45a6dbb888f08ff3f67c1fc05da9fdf3fceee57" protocol=ttrpc version=3
Nov 23 22:59:56.507016 systemd[1]: Started cri-containerd-7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027.scope - libcontainer container 7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027.
Nov 23 22:59:56.629939 containerd[2015]: time="2025-11-23T22:59:56.629696019Z" level=info msg="StartContainer for \"7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027\" returns successfully"
Nov 23 22:59:56.665015 systemd[1]: cri-containerd-7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027.scope: Deactivated successfully.
Nov 23 22:59:56.674371 containerd[2015]: time="2025-11-23T22:59:56.674310472Z" level=info msg="received container exit event container_id:\"7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027\" id:\"7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027\" pid:4164 exited_at:{seconds:1763938796 nanos:673545161}"
Nov 23 22:59:56.755422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d949e26e5f7063e4b5308a94590c982fd82e1f648749e46afd7f8be9ae6d027-rootfs.mount: Deactivated successfully.
Nov 23 22:59:57.672655 kubelet[3557]: E1123 22:59:57.671992 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861"
Nov 23 22:59:58.410774 containerd[2015]: time="2025-11-23T22:59:58.410321344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:58.412907 containerd[2015]: time="2025-11-23T22:59:58.412855318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858"
Nov 23 22:59:58.415365 containerd[2015]: time="2025-11-23T22:59:58.415288358Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:58.420751 containerd[2015]: time="2025-11-23T22:59:58.419815028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 22:59:58.421204 containerd[2015]: time="2025-11-23T22:59:58.421161537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.02079832s"
Nov 23 22:59:58.421335 containerd[2015]: time="2025-11-23T22:59:58.421307962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 23 22:59:58.424620 containerd[2015]: time="2025-11-23T22:59:58.424555057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 23 22:59:58.455992 containerd[2015]: time="2025-11-23T22:59:58.455896780Z" level=info msg="CreateContainer within sandbox \"fbeaf05f5f4432735e81e0a87abf199ea0e6d8626261fd1a254c4d2e0b637d07\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 23 22:59:58.474751 containerd[2015]: time="2025-11-23T22:59:58.474677946Z" level=info msg="Container 05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:58.483647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303832581.mount: Deactivated successfully.
Nov 23 22:59:58.501326 containerd[2015]: time="2025-11-23T22:59:58.501168525Z" level=info msg="CreateContainer within sandbox \"fbeaf05f5f4432735e81e0a87abf199ea0e6d8626261fd1a254c4d2e0b637d07\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c\""
Nov 23 22:59:58.502841 containerd[2015]: time="2025-11-23T22:59:58.502706013Z" level=info msg="StartContainer for \"05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c\""
Nov 23 22:59:58.505565 containerd[2015]: time="2025-11-23T22:59:58.505488788Z" level=info msg="connecting to shim 05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c" address="unix:///run/containerd/s/abfff816a86533e89bcc54b901880252a995386d03949b325e302ffe0f5300b5" protocol=ttrpc version=3
Nov 23 22:59:58.549220 systemd[1]: Started cri-containerd-05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c.scope - libcontainer container 05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c.
Nov 23 22:59:58.644657 containerd[2015]: time="2025-11-23T22:59:58.643718477Z" level=info msg="StartContainer for \"05a4f7511122e8d1801b283fd82b82ba88d3b08c53981f1f1208f72d0055931c\" returns successfully"
Nov 23 22:59:59.672700 kubelet[3557]: E1123 22:59:59.672603 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861"
Nov 23 22:59:59.972322 kubelet[3557]: I1123 22:59:59.970180 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d65df5fb9-pnc7b" podStartSLOduration=2.642027298 podStartE2EDuration="5.970154177s" podCreationTimestamp="2025-11-23 22:59:54 +0000 UTC" firstStartedPulling="2025-11-23 22:59:55.094406404 +0000 UTC m=+31.816705958" lastFinishedPulling="2025-11-23 22:59:58.422533271 +0000 UTC m=+35.144832837" observedRunningTime="2025-11-23 22:59:59.018043817 +0000 UTC m=+35.740343407" watchObservedRunningTime="2025-11-23 22:59:59.970154177 +0000 UTC m=+36.692453743"
Nov 23 23:00:01.422940 containerd[2015]: time="2025-11-23T23:00:01.422830711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:00:01.424819 containerd[2015]: time="2025-11-23T23:00:01.424764949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Nov 23 23:00:01.427189 containerd[2015]: time="2025-11-23T23:00:01.427084412Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:00:01.432010 containerd[2015]: time="2025-11-23T23:00:01.431930322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:00:01.434166 containerd[2015]: time="2025-11-23T23:00:01.434115174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.009491983s"
Nov 23 23:00:01.434669 containerd[2015]: time="2025-11-23T23:00:01.434364731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Nov 23 23:00:01.443806 containerd[2015]: time="2025-11-23T23:00:01.443025163Z" level=info msg="CreateContainer within sandbox \"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 23 23:00:01.469186 containerd[2015]: time="2025-11-23T23:00:01.469121525Z" level=info msg="Container f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:00:01.491819 containerd[2015]: time="2025-11-23T23:00:01.491768707Z" level=info msg="CreateContainer within sandbox \"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c\""
Nov 23 23:00:01.494968 containerd[2015]: time="2025-11-23T23:00:01.494866435Z" level=info msg="StartContainer for \"f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c\""
Nov 23 23:00:01.501364 containerd[2015]: time="2025-11-23T23:00:01.501219638Z" level=info msg="connecting to shim f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c" address="unix:///run/containerd/s/d88e2fe1716d8d49b0da1b4be45a6dbb888f08ff3f67c1fc05da9fdf3fceee57" protocol=ttrpc version=3
Nov 23 23:00:01.549086 systemd[1]: Started cri-containerd-f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c.scope - libcontainer container f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c.
Nov 23 23:00:01.672426 kubelet[3557]: E1123 23:00:01.672038 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861"
Nov 23 23:00:01.701304 containerd[2015]: time="2025-11-23T23:00:01.701059821Z" level=info msg="StartContainer for \"f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c\" returns successfully"
Nov 23 23:00:02.931983 containerd[2015]: time="2025-11-23T23:00:02.931712241Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 23 23:00:02.936634 systemd[1]: cri-containerd-f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c.scope: Deactivated successfully.
Nov 23 23:00:02.937906 systemd[1]: cri-containerd-f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c.scope: Consumed 932ms CPU time, 185.2M memory peak, 165.9M written to disk.
Nov 23 23:00:02.944919 containerd[2015]: time="2025-11-23T23:00:02.944624502Z" level=info msg="received container exit event container_id:\"f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c\" id:\"f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c\" pid:4269 exited_at:{seconds:1763938802 nanos:943482503}"
Nov 23 23:00:02.988202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5b75ee4f908f900541ba65b3c9675ba33e523798ef7ad10f188b111ed8acc5c-rootfs.mount: Deactivated successfully.
Nov 23 23:00:03.016642 kubelet[3557]: I1123 23:00:03.015256 3557 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 23 23:00:03.180515 systemd[1]: Created slice kubepods-burstable-pod722678b5_99d5_4055_8f3f_165568ddd9d9.slice - libcontainer container kubepods-burstable-pod722678b5_99d5_4055_8f3f_165568ddd9d9.slice.
Nov 23 23:00:03.211507 kubelet[3557]: I1123 23:00:03.210865 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-ca-bundle\") pod \"whisker-59584fc78f-744nh\" (UID: \"f39c3f69-d875-40c8-b0f9-149e3b427959\") " pod="calico-system/whisker-59584fc78f-744nh"
Nov 23 23:00:03.211507 kubelet[3557]: I1123 23:00:03.210938 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx55v\" (UniqueName: \"kubernetes.io/projected/722678b5-99d5-4055-8f3f-165568ddd9d9-kube-api-access-sx55v\") pod \"coredns-674b8bbfcf-nrc5l\" (UID: \"722678b5-99d5-4055-8f3f-165568ddd9d9\") " pod="kube-system/coredns-674b8bbfcf-nrc5l"
Nov 23 23:00:03.211507 kubelet[3557]: I1123 23:00:03.211038 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgkc\" (UniqueName: \"kubernetes.io/projected/f39c3f69-d875-40c8-b0f9-149e3b427959-kube-api-access-7xgkc\") pod \"whisker-59584fc78f-744nh\" (UID: \"f39c3f69-d875-40c8-b0f9-149e3b427959\") " pod="calico-system/whisker-59584fc78f-744nh"
Nov 23 23:00:03.211507 kubelet[3557]: I1123 23:00:03.211093 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-backend-key-pair\") pod \"whisker-59584fc78f-744nh\" (UID: \"f39c3f69-d875-40c8-b0f9-149e3b427959\") " pod="calico-system/whisker-59584fc78f-744nh"
Nov 23 23:00:03.211507 kubelet[3557]: I1123 23:00:03.211134 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/722678b5-99d5-4055-8f3f-165568ddd9d9-config-volume\") pod \"coredns-674b8bbfcf-nrc5l\" (UID: \"722678b5-99d5-4055-8f3f-165568ddd9d9\") " pod="kube-system/coredns-674b8bbfcf-nrc5l"
Nov 23 23:00:03.231468 systemd[1]: Created slice kubepods-besteffort-podf39c3f69_d875_40c8_b0f9_149e3b427959.slice - libcontainer container kubepods-besteffort-podf39c3f69_d875_40c8_b0f9_149e3b427959.slice.
Nov 23 23:00:03.279686 systemd[1]: Created slice kubepods-besteffort-pod5cc67e78_541e_4794_9086_b55b57263fd2.slice - libcontainer container kubepods-besteffort-pod5cc67e78_541e_4794_9086_b55b57263fd2.slice.
Nov 23 23:00:03.311899 kubelet[3557]: I1123 23:00:03.311391 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-876ds\" (UniqueName: \"kubernetes.io/projected/5cc67e78-541e-4794-9086-b55b57263fd2-kube-api-access-876ds\") pod \"calico-kube-controllers-856dd64f49-b8qsw\" (UID: \"5cc67e78-541e-4794-9086-b55b57263fd2\") " pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw"
Nov 23 23:00:03.321126 kubelet[3557]: I1123 23:00:03.314439 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/58da0435-c510-4733-869f-85a4fe15eaf3-goldmane-ca-bundle\") pod \"goldmane-666569f655-7qjg8\" (UID: \"58da0435-c510-4733-869f-85a4fe15eaf3\") " pod="calico-system/goldmane-666569f655-7qjg8"
Nov 23 23:00:03.321126 kubelet[3557]: I1123 23:00:03.314546 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffwh2\" (UniqueName: \"kubernetes.io/projected/58da0435-c510-4733-869f-85a4fe15eaf3-kube-api-access-ffwh2\") pod \"goldmane-666569f655-7qjg8\" (UID: \"58da0435-c510-4733-869f-85a4fe15eaf3\") " pod="calico-system/goldmane-666569f655-7qjg8"
Nov 23 23:00:03.321126 kubelet[3557]: I1123 23:00:03.314635 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/58da0435-c510-4733-869f-85a4fe15eaf3-goldmane-key-pair\") pod \"goldmane-666569f655-7qjg8\" (UID: \"58da0435-c510-4733-869f-85a4fe15eaf3\") " pod="calico-system/goldmane-666569f655-7qjg8"
Nov 23 23:00:03.321126 kubelet[3557]: I1123 23:00:03.314698 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58da0435-c510-4733-869f-85a4fe15eaf3-config\") pod \"goldmane-666569f655-7qjg8\" (UID: \"58da0435-c510-4733-869f-85a4fe15eaf3\") " pod="calico-system/goldmane-666569f655-7qjg8"
Nov 23 23:00:03.321126 kubelet[3557]: I1123 23:00:03.316816 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5cc67e78-541e-4794-9086-b55b57263fd2-tigera-ca-bundle\") pod \"calico-kube-controllers-856dd64f49-b8qsw\" (UID: \"5cc67e78-541e-4794-9086-b55b57263fd2\") " pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw"
Nov 23 23:00:03.371561 systemd[1]: Created slice kubepods-besteffort-pod58da0435_c510_4733_869f_85a4fe15eaf3.slice - libcontainer container kubepods-besteffort-pod58da0435_c510_4733_869f_85a4fe15eaf3.slice.
Nov 23 23:00:03.392948 systemd[1]: Created slice kubepods-besteffort-poda3a85933_215d_434f_beb8_3b039c057228.slice - libcontainer container kubepods-besteffort-poda3a85933_215d_434f_beb8_3b039c057228.slice.
Nov 23 23:00:03.426562 kubelet[3557]: I1123 23:00:03.425990 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a3a85933-215d-434f-beb8-3b039c057228-calico-apiserver-certs\") pod \"calico-apiserver-7fcd9fb754-gfvhg\" (UID: \"a3a85933-215d-434f-beb8-3b039c057228\") " pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg"
Nov 23 23:00:03.426562 kubelet[3557]: I1123 23:00:03.426074 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p89f2\" (UniqueName: \"kubernetes.io/projected/a3a85933-215d-434f-beb8-3b039c057228-kube-api-access-p89f2\") pod \"calico-apiserver-7fcd9fb754-gfvhg\" (UID: \"a3a85933-215d-434f-beb8-3b039c057228\") " pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg"
Nov 23 23:00:03.503854 containerd[2015]: time="2025-11-23T23:00:03.503784254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrc5l,Uid:722678b5-99d5-4055-8f3f-165568ddd9d9,Namespace:kube-system,Attempt:0,}"
Nov 23 23:00:03.529917 systemd[1]: Created slice kubepods-besteffort-pod82cd0774_54f5_4b66_8a2d_bd758439764f.slice - libcontainer container kubepods-besteffort-pod82cd0774_54f5_4b66_8a2d_bd758439764f.slice.
Nov 23 23:00:03.542024 kubelet[3557]: I1123 23:00:03.538220 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82cd0774-54f5-4b66-8a2d-bd758439764f-calico-apiserver-certs\") pod \"calico-apiserver-7fcd9fb754-2c5lm\" (UID: \"82cd0774-54f5-4b66-8a2d-bd758439764f\") " pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm"
Nov 23 23:00:03.542024 kubelet[3557]: I1123 23:00:03.538284 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkbqn\" (UniqueName: \"kubernetes.io/projected/82cd0774-54f5-4b66-8a2d-bd758439764f-kube-api-access-pkbqn\") pod \"calico-apiserver-7fcd9fb754-2c5lm\" (UID: \"82cd0774-54f5-4b66-8a2d-bd758439764f\") " pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm"
Nov 23 23:00:03.554320 systemd[1]: Created slice kubepods-besteffort-pod1265050f_2f3c_4c9a_a19e_43d1823e072d.slice - libcontainer container kubepods-besteffort-pod1265050f_2f3c_4c9a_a19e_43d1823e072d.slice.
Nov 23 23:00:03.566935 containerd[2015]: time="2025-11-23T23:00:03.565688043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59584fc78f-744nh,Uid:f39c3f69-d875-40c8-b0f9-149e3b427959,Namespace:calico-system,Attempt:0,}"
Nov 23 23:00:03.590851 systemd[1]: Created slice kubepods-burstable-podc32bb835_766b_4882_947e_95ccedc2df07.slice - libcontainer container kubepods-burstable-podc32bb835_766b_4882_947e_95ccedc2df07.slice.
Nov 23 23:00:03.616160 containerd[2015]: time="2025-11-23T23:00:03.616079026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856dd64f49-b8qsw,Uid:5cc67e78-541e-4794-9086-b55b57263fd2,Namespace:calico-system,Attempt:0,}"
Nov 23 23:00:03.639984 kubelet[3557]: I1123 23:00:03.639305 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmn9z\" (UniqueName: \"kubernetes.io/projected/c32bb835-766b-4882-947e-95ccedc2df07-kube-api-access-tmn9z\") pod \"coredns-674b8bbfcf-2244g\" (UID: \"c32bb835-766b-4882-947e-95ccedc2df07\") " pod="kube-system/coredns-674b8bbfcf-2244g"
Nov 23 23:00:03.639984 kubelet[3557]: I1123 23:00:03.639387 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32bb835-766b-4882-947e-95ccedc2df07-config-volume\") pod \"coredns-674b8bbfcf-2244g\" (UID: \"c32bb835-766b-4882-947e-95ccedc2df07\") " pod="kube-system/coredns-674b8bbfcf-2244g"
Nov 23 23:00:03.639984 kubelet[3557]: I1123 23:00:03.639437 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1265050f-2f3c-4c9a-a19e-43d1823e072d-calico-apiserver-certs\") pod \"calico-apiserver-6574bf4f5d-qz2dt\" (UID: \"1265050f-2f3c-4c9a-a19e-43d1823e072d\") " pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt"
Nov 23 23:00:03.639984 kubelet[3557]: I1123 23:00:03.639483 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqcn8\" (UniqueName: \"kubernetes.io/projected/1265050f-2f3c-4c9a-a19e-43d1823e072d-kube-api-access-jqcn8\") pod \"calico-apiserver-6574bf4f5d-qz2dt\" (UID: \"1265050f-2f3c-4c9a-a19e-43d1823e072d\") " pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt"
Nov 23 23:00:03.699106 systemd[1]: Created slice kubepods-besteffort-pod39270bf4_b6a6_4d62_8a14_a5e6fd018861.slice - libcontainer container kubepods-besteffort-pod39270bf4_b6a6_4d62_8a14_a5e6fd018861.slice.
Nov 23 23:00:03.709189 containerd[2015]: time="2025-11-23T23:00:03.709096208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7qjg8,Uid:58da0435-c510-4733-869f-85a4fe15eaf3,Namespace:calico-system,Attempt:0,}"
Nov 23 23:00:03.709573 containerd[2015]: time="2025-11-23T23:00:03.709502707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-gfvhg,Uid:a3a85933-215d-434f-beb8-3b039c057228,Namespace:calico-apiserver,Attempt:0,}"
Nov 23 23:00:03.714347 containerd[2015]: time="2025-11-23T23:00:03.714264515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xflhj,Uid:39270bf4-b6a6-4d62-8a14-a5e6fd018861,Namespace:calico-system,Attempt:0,}"
Nov 23 23:00:03.864610 containerd[2015]: time="2025-11-23T23:00:03.864218892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-2c5lm,Uid:82cd0774-54f5-4b66-8a2d-bd758439764f,Namespace:calico-apiserver,Attempt:0,}"
Nov 23 23:00:03.879656 containerd[2015]: time="2025-11-23T23:00:03.879310746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6574bf4f5d-qz2dt,Uid:1265050f-2f3c-4c9a-a19e-43d1823e072d,Namespace:calico-apiserver,Attempt:0,}"
Nov 23 23:00:03.923318 containerd[2015]: time="2025-11-23T23:00:03.923238347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2244g,Uid:c32bb835-766b-4882-947e-95ccedc2df07,Namespace:kube-system,Attempt:0,}"
Nov 23 23:00:04.042785 containerd[2015]: time="2025-11-23T23:00:04.042654406Z" level=error msg="Failed to destroy network for sandbox \"4a53cbd02b0214950f38b3db6039338f7212ca749b9f5b205ebb8e94fb1399e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 23 23:00:04.051508 systemd[1]: run-netns-cni\x2d51951fea\x2d7857\x2d1755\x2da186\x2d52e5b0fdaf4a.mount: Deactivated successfully.
Nov 23 23:00:04.058360 containerd[2015]: time="2025-11-23T23:00:04.055923809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrc5l,Uid:722678b5-99d5-4055-8f3f-165568ddd9d9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53cbd02b0214950f38b3db6039338f7212ca749b9f5b205ebb8e94fb1399e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 23 23:00:04.063775 kubelet[3557]: E1123 23:00:04.063685 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53cbd02b0214950f38b3db6039338f7212ca749b9f5b205ebb8e94fb1399e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 23 23:00:04.065474 kubelet[3557]: E1123 23:00:04.064533 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53cbd02b0214950f38b3db6039338f7212ca749b9f5b205ebb8e94fb1399e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nrc5l"
Nov 23 23:00:04.065474 kubelet[3557]: E1123 23:00:04.064582 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a53cbd02b0214950f38b3db6039338f7212ca749b9f5b205ebb8e94fb1399e7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nrc5l" Nov 23 23:00:04.065474 kubelet[3557]: E1123 23:00:04.064682 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nrc5l_kube-system(722678b5-99d5-4055-8f3f-165568ddd9d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nrc5l_kube-system(722678b5-99d5-4055-8f3f-165568ddd9d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a53cbd02b0214950f38b3db6039338f7212ca749b9f5b205ebb8e94fb1399e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nrc5l" podUID="722678b5-99d5-4055-8f3f-165568ddd9d9" Nov 23 23:00:04.078756 containerd[2015]: time="2025-11-23T23:00:04.077946535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:00:04.388193 containerd[2015]: time="2025-11-23T23:00:04.388013974Z" level=error msg="Failed to destroy network for sandbox \"cc5b13bdc99f8b79e53f3c390e58f563ba23e890fccf84fcf584509f454e43d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.394647 systemd[1]: run-netns-cni\x2de2d8d7b4\x2d27db\x2d1e96\x2d31f8\x2d984ae7f4ca0b.mount: Deactivated successfully. 
Nov 23 23:00:04.397336 containerd[2015]: time="2025-11-23T23:00:04.396710316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856dd64f49-b8qsw,Uid:5cc67e78-541e-4794-9086-b55b57263fd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5b13bdc99f8b79e53f3c390e58f563ba23e890fccf84fcf584509f454e43d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.398494 kubelet[3557]: E1123 23:00:04.398031 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5b13bdc99f8b79e53f3c390e58f563ba23e890fccf84fcf584509f454e43d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.398494 kubelet[3557]: E1123 23:00:04.398115 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5b13bdc99f8b79e53f3c390e58f563ba23e890fccf84fcf584509f454e43d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" Nov 23 23:00:04.398494 kubelet[3557]: E1123 23:00:04.398160 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc5b13bdc99f8b79e53f3c390e58f563ba23e890fccf84fcf584509f454e43d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" Nov 23 23:00:04.398823 kubelet[3557]: E1123 23:00:04.398247 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-856dd64f49-b8qsw_calico-system(5cc67e78-541e-4794-9086-b55b57263fd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-856dd64f49-b8qsw_calico-system(5cc67e78-541e-4794-9086-b55b57263fd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc5b13bdc99f8b79e53f3c390e58f563ba23e890fccf84fcf584509f454e43d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:00:04.474521 containerd[2015]: time="2025-11-23T23:00:04.474426115Z" level=error msg="Failed to destroy network for sandbox \"40de3ef27fde4e2dde6e222fa5256cf42bd4a364396affa69d432be90f8956ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.480372 systemd[1]: run-netns-cni\x2d9b55f47e\x2d23bb\x2dbc38\x2d5df3\x2df68cc2b48b85.mount: Deactivated successfully. 
Nov 23 23:00:04.488116 containerd[2015]: time="2025-11-23T23:00:04.488007374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59584fc78f-744nh,Uid:f39c3f69-d875-40c8-b0f9-149e3b427959,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40de3ef27fde4e2dde6e222fa5256cf42bd4a364396affa69d432be90f8956ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.488326 containerd[2015]: time="2025-11-23T23:00:04.488232859Z" level=error msg="Failed to destroy network for sandbox \"833e63a135335ff3b1f5d4076b527707480ce079985f88053cd0bbdb9ca40687\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.492434 kubelet[3557]: E1123 23:00:04.491617 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40de3ef27fde4e2dde6e222fa5256cf42bd4a364396affa69d432be90f8956ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.492434 kubelet[3557]: E1123 23:00:04.491708 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40de3ef27fde4e2dde6e222fa5256cf42bd4a364396affa69d432be90f8956ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59584fc78f-744nh" Nov 23 23:00:04.492434 kubelet[3557]: E1123 23:00:04.491771 3557 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40de3ef27fde4e2dde6e222fa5256cf42bd4a364396affa69d432be90f8956ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59584fc78f-744nh" Nov 23 23:00:04.494989 kubelet[3557]: E1123 23:00:04.491958 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59584fc78f-744nh_calico-system(f39c3f69-d875-40c8-b0f9-149e3b427959)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59584fc78f-744nh_calico-system(f39c3f69-d875-40c8-b0f9-149e3b427959)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40de3ef27fde4e2dde6e222fa5256cf42bd4a364396affa69d432be90f8956ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59584fc78f-744nh" podUID="f39c3f69-d875-40c8-b0f9-149e3b427959" Nov 23 23:00:04.497080 systemd[1]: run-netns-cni\x2d2fc3b23e\x2d1ece\x2d8fb1\x2d8846\x2d3f9321fb6803.mount: Deactivated successfully. 
Nov 23 23:00:04.502536 containerd[2015]: time="2025-11-23T23:00:04.502238518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xflhj,Uid:39270bf4-b6a6-4d62-8a14-a5e6fd018861,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"833e63a135335ff3b1f5d4076b527707480ce079985f88053cd0bbdb9ca40687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.505430 kubelet[3557]: E1123 23:00:04.503809 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833e63a135335ff3b1f5d4076b527707480ce079985f88053cd0bbdb9ca40687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.505430 kubelet[3557]: E1123 23:00:04.503890 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833e63a135335ff3b1f5d4076b527707480ce079985f88053cd0bbdb9ca40687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xflhj" Nov 23 23:00:04.505430 kubelet[3557]: E1123 23:00:04.503925 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833e63a135335ff3b1f5d4076b527707480ce079985f88053cd0bbdb9ca40687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xflhj" 
Nov 23 23:00:04.505748 kubelet[3557]: E1123 23:00:04.504001 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"833e63a135335ff3b1f5d4076b527707480ce079985f88053cd0bbdb9ca40687\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:00:04.538866 containerd[2015]: time="2025-11-23T23:00:04.538793955Z" level=error msg="Failed to destroy network for sandbox \"683bc6d3c676dc14a88d2dcd5217c1b42635f7d369d45212dc5e2050eb8c4006\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.543601 containerd[2015]: time="2025-11-23T23:00:04.543043731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6574bf4f5d-qz2dt,Uid:1265050f-2f3c-4c9a-a19e-43d1823e072d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"683bc6d3c676dc14a88d2dcd5217c1b42635f7d369d45212dc5e2050eb8c4006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.543850 kubelet[3557]: E1123 23:00:04.543425 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"683bc6d3c676dc14a88d2dcd5217c1b42635f7d369d45212dc5e2050eb8c4006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.543850 kubelet[3557]: E1123 23:00:04.543502 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683bc6d3c676dc14a88d2dcd5217c1b42635f7d369d45212dc5e2050eb8c4006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" Nov 23 23:00:04.543850 kubelet[3557]: E1123 23:00:04.543535 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"683bc6d3c676dc14a88d2dcd5217c1b42635f7d369d45212dc5e2050eb8c4006\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" Nov 23 23:00:04.546903 kubelet[3557]: E1123 23:00:04.543954 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"683bc6d3c676dc14a88d2dcd5217c1b42635f7d369d45212dc5e2050eb8c4006\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:04.547531 containerd[2015]: time="2025-11-23T23:00:04.547221807Z" level=error msg="Failed to destroy network for sandbox \"648fd0fda1b4556426b38a43f099bb3e04874d3e3817316145fdd07beec75323\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.550984 containerd[2015]: time="2025-11-23T23:00:04.550894286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-gfvhg,Uid:a3a85933-215d-434f-beb8-3b039c057228,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"648fd0fda1b4556426b38a43f099bb3e04874d3e3817316145fdd07beec75323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.553764 kubelet[3557]: E1123 23:00:04.551630 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648fd0fda1b4556426b38a43f099bb3e04874d3e3817316145fdd07beec75323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.556893 kubelet[3557]: E1123 23:00:04.555459 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648fd0fda1b4556426b38a43f099bb3e04874d3e3817316145fdd07beec75323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" Nov 23 23:00:04.556893 kubelet[3557]: E1123 23:00:04.555525 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"648fd0fda1b4556426b38a43f099bb3e04874d3e3817316145fdd07beec75323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" Nov 23 23:00:04.556893 kubelet[3557]: E1123 23:00:04.556803 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"648fd0fda1b4556426b38a43f099bb3e04874d3e3817316145fdd07beec75323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:04.564405 containerd[2015]: time="2025-11-23T23:00:04.564328448Z" level=error msg="Failed to destroy network for sandbox \"2b298b99d7375fe14cd0b4d1edeadfe2e8687d813df266837bf11c9b058b7859\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.568252 containerd[2015]: time="2025-11-23T23:00:04.568021842Z" level=error msg="Failed to destroy network for sandbox \"b976d8ed3a8a82959f10b484e21e98d85301e146d633eccb8cb4f8b7de26b149\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.568637 containerd[2015]: time="2025-11-23T23:00:04.568561464Z" level=error msg="Failed to destroy network for sandbox \"78abf0b74101b20e92a137aeeb0cbdcd119caa97f44ed23a9d854c97d4f4d616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.569715 containerd[2015]: time="2025-11-23T23:00:04.569644969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7qjg8,Uid:58da0435-c510-4733-869f-85a4fe15eaf3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b298b99d7375fe14cd0b4d1edeadfe2e8687d813df266837bf11c9b058b7859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.570392 kubelet[3557]: E1123 23:00:04.570199 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b298b99d7375fe14cd0b4d1edeadfe2e8687d813df266837bf11c9b058b7859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.570392 kubelet[3557]: E1123 23:00:04.570282 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b298b99d7375fe14cd0b4d1edeadfe2e8687d813df266837bf11c9b058b7859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7qjg8" Nov 23 23:00:04.570392 kubelet[3557]: E1123 23:00:04.570319 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b298b99d7375fe14cd0b4d1edeadfe2e8687d813df266837bf11c9b058b7859\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7qjg8" Nov 23 23:00:04.570681 kubelet[3557]: E1123 23:00:04.570638 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7qjg8_calico-system(58da0435-c510-4733-869f-85a4fe15eaf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7qjg8_calico-system(58da0435-c510-4733-869f-85a4fe15eaf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b298b99d7375fe14cd0b4d1edeadfe2e8687d813df266837bf11c9b058b7859\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:00:04.573653 containerd[2015]: time="2025-11-23T23:00:04.573343442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2244g,Uid:c32bb835-766b-4882-947e-95ccedc2df07,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b976d8ed3a8a82959f10b484e21e98d85301e146d633eccb8cb4f8b7de26b149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 
23:00:04.574597 kubelet[3557]: E1123 23:00:04.574317 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b976d8ed3a8a82959f10b484e21e98d85301e146d633eccb8cb4f8b7de26b149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.574597 kubelet[3557]: E1123 23:00:04.574399 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b976d8ed3a8a82959f10b484e21e98d85301e146d633eccb8cb4f8b7de26b149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2244g" Nov 23 23:00:04.574597 kubelet[3557]: E1123 23:00:04.574432 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b976d8ed3a8a82959f10b484e21e98d85301e146d633eccb8cb4f8b7de26b149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2244g" Nov 23 23:00:04.575357 kubelet[3557]: E1123 23:00:04.574520 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2244g_kube-system(c32bb835-766b-4882-947e-95ccedc2df07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2244g_kube-system(c32bb835-766b-4882-947e-95ccedc2df07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b976d8ed3a8a82959f10b484e21e98d85301e146d633eccb8cb4f8b7de26b149\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2244g" podUID="c32bb835-766b-4882-947e-95ccedc2df07" Nov 23 23:00:04.575655 containerd[2015]: time="2025-11-23T23:00:04.575582669Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-2c5lm,Uid:82cd0774-54f5-4b66-8a2d-bd758439764f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78abf0b74101b20e92a137aeeb0cbdcd119caa97f44ed23a9d854c97d4f4d616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.576811 kubelet[3557]: E1123 23:00:04.576420 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78abf0b74101b20e92a137aeeb0cbdcd119caa97f44ed23a9d854c97d4f4d616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:04.576961 kubelet[3557]: E1123 23:00:04.576835 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78abf0b74101b20e92a137aeeb0cbdcd119caa97f44ed23a9d854c97d4f4d616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" Nov 23 23:00:04.576961 kubelet[3557]: E1123 23:00:04.576881 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"78abf0b74101b20e92a137aeeb0cbdcd119caa97f44ed23a9d854c97d4f4d616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" Nov 23 23:00:04.577074 kubelet[3557]: E1123 23:00:04.576979 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78abf0b74101b20e92a137aeeb0cbdcd119caa97f44ed23a9d854c97d4f4d616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:00:04.985272 systemd[1]: run-netns-cni\x2d30e96459\x2d2f15\x2d9bc2\x2dbc2e\x2d254877ee26b0.mount: Deactivated successfully. Nov 23 23:00:04.985447 systemd[1]: run-netns-cni\x2de8677941\x2dfb43\x2d0242\x2dcf26\x2d715914935ed8.mount: Deactivated successfully. Nov 23 23:00:04.985567 systemd[1]: run-netns-cni\x2d761a4d69\x2d12ef\x2dd797\x2df4eb\x2d39c54046c53e.mount: Deactivated successfully. Nov 23 23:00:04.985684 systemd[1]: run-netns-cni\x2d66db8233\x2d3cc6\x2da983\x2d0d9c\x2dd8f274956b99.mount: Deactivated successfully. Nov 23 23:00:04.986264 systemd[1]: run-netns-cni\x2daeee2601\x2d85cb\x2dbbb4\x2d0710\x2dbbfb0d4268de.mount: Deactivated successfully. 
Nov 23 23:00:14.683141 containerd[2015]: time="2025-11-23T23:00:14.682777987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xflhj,Uid:39270bf4-b6a6-4d62-8a14-a5e6fd018861,Namespace:calico-system,Attempt:0,}" Nov 23 23:00:15.076431 containerd[2015]: time="2025-11-23T23:00:15.076358177Z" level=error msg="Failed to destroy network for sandbox \"dee95a8bebb9413dc52430b435d9feaf384ab90aa174ba451bca4e51d39737b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.083991 systemd[1]: run-netns-cni\x2dd03cb453\x2dc89b\x2d5902\x2ddd85\x2da92cf99cd724.mount: Deactivated successfully. Nov 23 23:00:15.086904 containerd[2015]: time="2025-11-23T23:00:15.086606175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xflhj,Uid:39270bf4-b6a6-4d62-8a14-a5e6fd018861,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee95a8bebb9413dc52430b435d9feaf384ab90aa174ba451bca4e51d39737b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.087779 kubelet[3557]: E1123 23:00:15.087667 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee95a8bebb9413dc52430b435d9feaf384ab90aa174ba451bca4e51d39737b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.088904 kubelet[3557]: E1123 23:00:15.087779 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dee95a8bebb9413dc52430b435d9feaf384ab90aa174ba451bca4e51d39737b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xflhj" Nov 23 23:00:15.088904 kubelet[3557]: E1123 23:00:15.087816 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dee95a8bebb9413dc52430b435d9feaf384ab90aa174ba451bca4e51d39737b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xflhj" Nov 23 23:00:15.088904 kubelet[3557]: E1123 23:00:15.087947 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dee95a8bebb9413dc52430b435d9feaf384ab90aa174ba451bca4e51d39737b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:00:15.676439 containerd[2015]: time="2025-11-23T23:00:15.676359292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-2c5lm,Uid:82cd0774-54f5-4b66-8a2d-bd758439764f,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:00:15.678843 containerd[2015]: time="2025-11-23T23:00:15.677719848Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6574bf4f5d-qz2dt,Uid:1265050f-2f3c-4c9a-a19e-43d1823e072d,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:00:15.679832 containerd[2015]: time="2025-11-23T23:00:15.679745693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59584fc78f-744nh,Uid:f39c3f69-d875-40c8-b0f9-149e3b427959,Namespace:calico-system,Attempt:0,}" Nov 23 23:00:15.853659 containerd[2015]: time="2025-11-23T23:00:15.853337338Z" level=error msg="Failed to destroy network for sandbox \"9fd9f08cd6d004bb091328fcbcef70e9cc1435b74fd2b76e1df683f367f8cd81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.858952 containerd[2015]: time="2025-11-23T23:00:15.857063581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6574bf4f5d-qz2dt,Uid:1265050f-2f3c-4c9a-a19e-43d1823e072d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd9f08cd6d004bb091328fcbcef70e9cc1435b74fd2b76e1df683f367f8cd81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.859296 kubelet[3557]: E1123 23:00:15.858921 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd9f08cd6d004bb091328fcbcef70e9cc1435b74fd2b76e1df683f367f8cd81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.859296 kubelet[3557]: E1123 23:00:15.859013 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9fd9f08cd6d004bb091328fcbcef70e9cc1435b74fd2b76e1df683f367f8cd81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" Nov 23 23:00:15.859296 kubelet[3557]: E1123 23:00:15.859049 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd9f08cd6d004bb091328fcbcef70e9cc1435b74fd2b76e1df683f367f8cd81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" Nov 23 23:00:15.861116 kubelet[3557]: E1123 23:00:15.859145 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fd9f08cd6d004bb091328fcbcef70e9cc1435b74fd2b76e1df683f367f8cd81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:15.863110 systemd[1]: run-netns-cni\x2dc4975dfa\x2d46ff\x2d2730\x2d869a\x2d505af10002ea.mount: Deactivated successfully. Nov 23 23:00:15.957399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995472651.mount: Deactivated successfully. 
Nov 23 23:00:15.983846 containerd[2015]: time="2025-11-23T23:00:15.983779144Z" level=error msg="Failed to destroy network for sandbox \"903270a541d07573f1603e92c95351957bdd90a717ef21a2b4220d4a89571f89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:15.990474 systemd[1]: run-netns-cni\x2dd46b2b17\x2dbfc0\x2d59c9\x2da00a\x2d89f154d12cf5.mount: Deactivated successfully. Nov 23 23:00:16.001310 containerd[2015]: time="2025-11-23T23:00:15.998482928Z" level=error msg="Failed to destroy network for sandbox \"e38a254714a85778613c23df324f23231d05ea4c573084a960f1728ff032986e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.003029 systemd[1]: run-netns-cni\x2d917faf99\x2d19e4\x2dcb4d\x2d74c7\x2d3c11454be3b3.mount: Deactivated successfully. 
Nov 23 23:00:16.040197 containerd[2015]: time="2025-11-23T23:00:16.040117191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-2c5lm,Uid:82cd0774-54f5-4b66-8a2d-bd758439764f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"903270a541d07573f1603e92c95351957bdd90a717ef21a2b4220d4a89571f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.040543 kubelet[3557]: E1123 23:00:16.040463 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903270a541d07573f1603e92c95351957bdd90a717ef21a2b4220d4a89571f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.040672 kubelet[3557]: E1123 23:00:16.040555 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903270a541d07573f1603e92c95351957bdd90a717ef21a2b4220d4a89571f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" Nov 23 23:00:16.040672 kubelet[3557]: E1123 23:00:16.040591 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"903270a541d07573f1603e92c95351957bdd90a717ef21a2b4220d4a89571f89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" Nov 23 23:00:16.040833 kubelet[3557]: E1123 23:00:16.040676 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"903270a541d07573f1603e92c95351957bdd90a717ef21a2b4220d4a89571f89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:00:16.042877 containerd[2015]: time="2025-11-23T23:00:16.042654251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59584fc78f-744nh,Uid:f39c3f69-d875-40c8-b0f9-149e3b427959,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e38a254714a85778613c23df324f23231d05ea4c573084a960f1728ff032986e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.043252 kubelet[3557]: E1123 23:00:16.043195 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e38a254714a85778613c23df324f23231d05ea4c573084a960f1728ff032986e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.043332 kubelet[3557]: E1123 23:00:16.043291 3557 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e38a254714a85778613c23df324f23231d05ea4c573084a960f1728ff032986e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59584fc78f-744nh" Nov 23 23:00:16.043441 kubelet[3557]: E1123 23:00:16.043328 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e38a254714a85778613c23df324f23231d05ea4c573084a960f1728ff032986e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59584fc78f-744nh" Nov 23 23:00:16.043441 kubelet[3557]: E1123 23:00:16.043412 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59584fc78f-744nh_calico-system(f39c3f69-d875-40c8-b0f9-149e3b427959)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59584fc78f-744nh_calico-system(f39c3f69-d875-40c8-b0f9-149e3b427959)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e38a254714a85778613c23df324f23231d05ea4c573084a960f1728ff032986e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59584fc78f-744nh" podUID="f39c3f69-d875-40c8-b0f9-149e3b427959" Nov 23 23:00:16.053924 containerd[2015]: time="2025-11-23T23:00:16.052928469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:00:16.055970 containerd[2015]: 
time="2025-11-23T23:00:16.055908479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:00:16.058535 containerd[2015]: time="2025-11-23T23:00:16.058492878Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:00:16.062922 containerd[2015]: time="2025-11-23T23:00:16.062870074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:00:16.064210 containerd[2015]: time="2025-11-23T23:00:16.064142806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 11.986107942s" Nov 23 23:00:16.064210 containerd[2015]: time="2025-11-23T23:00:16.064203904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:00:16.088219 containerd[2015]: time="2025-11-23T23:00:16.088039236Z" level=info msg="CreateContainer within sandbox \"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:00:16.106555 containerd[2015]: time="2025-11-23T23:00:16.106505497Z" level=info msg="Container 40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:00:16.126801 containerd[2015]: time="2025-11-23T23:00:16.126323729Z" level=info msg="CreateContainer within sandbox 
\"44025f6a87df8177e507ae3a2d6968395ea9ed8842096f4a4edcdd3d13b844a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4\"" Nov 23 23:00:16.130668 containerd[2015]: time="2025-11-23T23:00:16.130474672Z" level=info msg="StartContainer for \"40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4\"" Nov 23 23:00:16.137628 containerd[2015]: time="2025-11-23T23:00:16.137538282Z" level=info msg="connecting to shim 40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4" address="unix:///run/containerd/s/d88e2fe1716d8d49b0da1b4be45a6dbb888f08ff3f67c1fc05da9fdf3fceee57" protocol=ttrpc version=3 Nov 23 23:00:16.176412 systemd[1]: Started cri-containerd-40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4.scope - libcontainer container 40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4. Nov 23 23:00:16.308618 containerd[2015]: time="2025-11-23T23:00:16.308450776Z" level=info msg="StartContainer for \"40acbe245015dfbf4df0a3ae10709b1fb4981ef16a11d3620ab9d6d55e68dcb4\" returns successfully" Nov 23 23:00:16.671932 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:00:16.672080 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 23 23:00:16.673792 containerd[2015]: time="2025-11-23T23:00:16.673309373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2244g,Uid:c32bb835-766b-4882-947e-95ccedc2df07,Namespace:kube-system,Attempt:0,}" Nov 23 23:00:16.674273 containerd[2015]: time="2025-11-23T23:00:16.674224230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-gfvhg,Uid:a3a85933-215d-434f-beb8-3b039c057228,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:00:16.897968 containerd[2015]: time="2025-11-23T23:00:16.897833629Z" level=error msg="Failed to destroy network for sandbox \"168fc879bb8202a9902fa507deaf49d2895b33f7c7ef53724589ae9f3a1553cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.904388 containerd[2015]: time="2025-11-23T23:00:16.904285329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2244g,Uid:c32bb835-766b-4882-947e-95ccedc2df07,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"168fc879bb8202a9902fa507deaf49d2895b33f7c7ef53724589ae9f3a1553cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.905989 kubelet[3557]: E1123 23:00:16.905911 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168fc879bb8202a9902fa507deaf49d2895b33f7c7ef53724589ae9f3a1553cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.907170 systemd[1]: 
run-netns-cni\x2d97431468\x2d1155\x2d7e7f\x2d3e10\x2d08b2e6a47a62.mount: Deactivated successfully. Nov 23 23:00:16.909137 kubelet[3557]: E1123 23:00:16.908649 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168fc879bb8202a9902fa507deaf49d2895b33f7c7ef53724589ae9f3a1553cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2244g" Nov 23 23:00:16.909137 kubelet[3557]: E1123 23:00:16.908844 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"168fc879bb8202a9902fa507deaf49d2895b33f7c7ef53724589ae9f3a1553cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2244g" Nov 23 23:00:16.910979 kubelet[3557]: E1123 23:00:16.910493 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2244g_kube-system(c32bb835-766b-4882-947e-95ccedc2df07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2244g_kube-system(c32bb835-766b-4882-947e-95ccedc2df07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"168fc879bb8202a9902fa507deaf49d2895b33f7c7ef53724589ae9f3a1553cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2244g" podUID="c32bb835-766b-4882-947e-95ccedc2df07" Nov 23 23:00:16.925845 containerd[2015]: time="2025-11-23T23:00:16.925203023Z" level=error msg="Failed to destroy 
network for sandbox \"3ce5e4f33a0fc403dcc14f526360e0db49805f422ca5ed0ecbc0f27f283c157d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.928780 containerd[2015]: time="2025-11-23T23:00:16.928291122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-gfvhg,Uid:a3a85933-215d-434f-beb8-3b039c057228,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ce5e4f33a0fc403dcc14f526360e0db49805f422ca5ed0ecbc0f27f283c157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.930231 kubelet[3557]: E1123 23:00:16.930106 3557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ce5e4f33a0fc403dcc14f526360e0db49805f422ca5ed0ecbc0f27f283c157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:00:16.930667 kubelet[3557]: E1123 23:00:16.930568 3557 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ce5e4f33a0fc403dcc14f526360e0db49805f422ca5ed0ecbc0f27f283c157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" Nov 23 23:00:16.931257 kubelet[3557]: E1123 23:00:16.930674 3557 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"3ce5e4f33a0fc403dcc14f526360e0db49805f422ca5ed0ecbc0f27f283c157d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" Nov 23 23:00:16.932816 kubelet[3557]: E1123 23:00:16.930843 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ce5e4f33a0fc403dcc14f526360e0db49805f422ca5ed0ecbc0f27f283c157d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:16.935838 systemd[1]: run-netns-cni\x2ddce40627\x2d3f89\x2da591\x2dcb09\x2d7f5ca85e0216.mount: Deactivated successfully. 
Nov 23 23:00:17.177980 kubelet[3557]: I1123 23:00:17.177700 3557 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-ca-bundle\") pod \"f39c3f69-d875-40c8-b0f9-149e3b427959\" (UID: \"f39c3f69-d875-40c8-b0f9-149e3b427959\") " Nov 23 23:00:17.177980 kubelet[3557]: I1123 23:00:17.177808 3557 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-backend-key-pair\") pod \"f39c3f69-d875-40c8-b0f9-149e3b427959\" (UID: \"f39c3f69-d875-40c8-b0f9-149e3b427959\") " Nov 23 23:00:17.177980 kubelet[3557]: I1123 23:00:17.177862 3557 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xgkc\" (UniqueName: \"kubernetes.io/projected/f39c3f69-d875-40c8-b0f9-149e3b427959-kube-api-access-7xgkc\") pod \"f39c3f69-d875-40c8-b0f9-149e3b427959\" (UID: \"f39c3f69-d875-40c8-b0f9-149e3b427959\") " Nov 23 23:00:17.180262 kubelet[3557]: I1123 23:00:17.180128 3557 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f39c3f69-d875-40c8-b0f9-149e3b427959" (UID: "f39c3f69-d875-40c8-b0f9-149e3b427959"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:00:17.194200 kubelet[3557]: I1123 23:00:17.194127 3557 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f39c3f69-d875-40c8-b0f9-149e3b427959" (UID: "f39c3f69-d875-40c8-b0f9-149e3b427959"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:00:17.197818 kubelet[3557]: I1123 23:00:17.195461 3557 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39c3f69-d875-40c8-b0f9-149e3b427959-kube-api-access-7xgkc" (OuterVolumeSpecName: "kube-api-access-7xgkc") pod "f39c3f69-d875-40c8-b0f9-149e3b427959" (UID: "f39c3f69-d875-40c8-b0f9-149e3b427959"). InnerVolumeSpecName "kube-api-access-7xgkc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:00:17.198817 systemd[1]: var-lib-kubelet-pods-f39c3f69\x2dd875\x2d40c8\x2db0f9\x2d149e3b427959-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7xgkc.mount: Deactivated successfully. Nov 23 23:00:17.200040 systemd[1]: var-lib-kubelet-pods-f39c3f69\x2dd875\x2d40c8\x2db0f9\x2d149e3b427959-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:00:17.278951 kubelet[3557]: I1123 23:00:17.278883 3557 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-ca-bundle\") on node \"ip-172-31-24-27\" DevicePath \"\"" Nov 23 23:00:17.278951 kubelet[3557]: I1123 23:00:17.278945 3557 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f39c3f69-d875-40c8-b0f9-149e3b427959-whisker-backend-key-pair\") on node \"ip-172-31-24-27\" DevicePath \"\"" Nov 23 23:00:17.279212 kubelet[3557]: I1123 23:00:17.278971 3557 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7xgkc\" (UniqueName: \"kubernetes.io/projected/f39c3f69-d875-40c8-b0f9-149e3b427959-kube-api-access-7xgkc\") on node \"ip-172-31-24-27\" DevicePath \"\"" Nov 23 23:00:17.411968 systemd[1]: Removed slice kubepods-besteffort-podf39c3f69_d875_40c8_b0f9_149e3b427959.slice - libcontainer container kubepods-besteffort-podf39c3f69_d875_40c8_b0f9_149e3b427959.slice. 
Nov 23 23:00:17.467940 kubelet[3557]: I1123 23:00:17.467622 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cc75f" podStartSLOduration=2.482837882 podStartE2EDuration="23.467583369s" podCreationTimestamp="2025-11-23 22:59:54 +0000 UTC" firstStartedPulling="2025-11-23 22:59:55.081001957 +0000 UTC m=+31.803301523" lastFinishedPulling="2025-11-23 23:00:16.065747444 +0000 UTC m=+52.788047010" observedRunningTime="2025-11-23 23:00:17.158415913 +0000 UTC m=+53.880715587" watchObservedRunningTime="2025-11-23 23:00:17.467583369 +0000 UTC m=+54.189882935" Nov 23 23:00:17.562588 systemd[1]: Created slice kubepods-besteffort-pod3ebffe5d_c943_4ecd_a570_27b8df3681f4.slice - libcontainer container kubepods-besteffort-pod3ebffe5d_c943_4ecd_a570_27b8df3681f4.slice. Nov 23 23:00:17.675798 containerd[2015]: time="2025-11-23T23:00:17.674881884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856dd64f49-b8qsw,Uid:5cc67e78-541e-4794-9086-b55b57263fd2,Namespace:calico-system,Attempt:0,}" Nov 23 23:00:17.684601 kubelet[3557]: I1123 23:00:17.684367 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ebffe5d-c943-4ecd-a570-27b8df3681f4-whisker-backend-key-pair\") pod \"whisker-585dccbd85-tdmw2\" (UID: \"3ebffe5d-c943-4ecd-a570-27b8df3681f4\") " pod="calico-system/whisker-585dccbd85-tdmw2" Nov 23 23:00:17.685016 kubelet[3557]: I1123 23:00:17.684982 3557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ebffe5d-c943-4ecd-a570-27b8df3681f4-whisker-ca-bundle\") pod \"whisker-585dccbd85-tdmw2\" (UID: \"3ebffe5d-c943-4ecd-a570-27b8df3681f4\") " pod="calico-system/whisker-585dccbd85-tdmw2" Nov 23 23:00:17.686594 kubelet[3557]: I1123 23:00:17.685243 3557 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4fdh\" (UniqueName: \"kubernetes.io/projected/3ebffe5d-c943-4ecd-a570-27b8df3681f4-kube-api-access-n4fdh\") pod \"whisker-585dccbd85-tdmw2\" (UID: \"3ebffe5d-c943-4ecd-a570-27b8df3681f4\") " pod="calico-system/whisker-585dccbd85-tdmw2" Nov 23 23:00:17.686594 kubelet[3557]: I1123 23:00:17.686135 3557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39c3f69-d875-40c8-b0f9-149e3b427959" path="/var/lib/kubelet/pods/f39c3f69-d875-40c8-b0f9-149e3b427959/volumes" Nov 23 23:00:17.873842 containerd[2015]: time="2025-11-23T23:00:17.872934674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585dccbd85-tdmw2,Uid:3ebffe5d-c943-4ecd-a570-27b8df3681f4,Namespace:calico-system,Attempt:0,}" Nov 23 23:00:18.133925 (udev-worker)[4709]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:00:18.144907 systemd-networkd[1821]: cali24d700ef1c1: Link UP Nov 23 23:00:18.151192 systemd-networkd[1821]: cali24d700ef1c1: Gained carrier Nov 23 23:00:18.205997 containerd[2015]: 2025-11-23 23:00:17.738 [INFO][4802] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:00:18.205997 containerd[2015]: 2025-11-23 23:00:17.842 [INFO][4802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0 calico-kube-controllers-856dd64f49- calico-system 5cc67e78-541e-4794-9086-b55b57263fd2 898 0 2025-11-23 22:59:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:856dd64f49 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-27 calico-kube-controllers-856dd64f49-b8qsw eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali24d700ef1c1 [] [] }} ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-" Nov 23 23:00:18.205997 containerd[2015]: 2025-11-23 23:00:17.842 [INFO][4802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.205997 containerd[2015]: 2025-11-23 23:00:17.964 [INFO][4823] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" HandleID="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Workload="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:17.965 [INFO][4823] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" HandleID="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Workload="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-27", "pod":"calico-kube-controllers-856dd64f49-b8qsw", "timestamp":"2025-11-23 23:00:17.964939582 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:18.208368 
containerd[2015]: 2025-11-23 23:00:17.965 [INFO][4823] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:17.965 [INFO][4823] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:17.965 [INFO][4823] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:17.992 [INFO][4823] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" host="ip-172-31-24-27" Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:18.009 [INFO][4823] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:18.029 [INFO][4823] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:18.041 [INFO][4823] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:18.208368 containerd[2015]: 2025-11-23 23:00:18.047 [INFO][4823] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.048 [INFO][4823] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" host="ip-172-31-24-27" Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.052 [INFO][4823] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.062 [INFO][4823] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 
handle="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" host="ip-172-31-24-27" Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.080 [INFO][4823] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.193/26] block=192.168.122.192/26 handle="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" host="ip-172-31-24-27" Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.081 [INFO][4823] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.193/26] handle="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" host="ip-172-31-24-27" Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.082 [INFO][4823] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:18.210853 containerd[2015]: 2025-11-23 23:00:18.082 [INFO][4823] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.193/26] IPv6=[] ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" HandleID="k8s-pod-network.ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Workload="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.211389 containerd[2015]: 2025-11-23 23:00:18.097 [INFO][4802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0", GenerateName:"calico-kube-controllers-856dd64f49-", Namespace:"calico-system", SelfLink:"", UID:"5cc67e78-541e-4794-9086-b55b57263fd2", ResourceVersion:"898", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856dd64f49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"calico-kube-controllers-856dd64f49-b8qsw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24d700ef1c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:18.211563 containerd[2015]: 2025-11-23 23:00:18.097 [INFO][4802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.193/32] ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.211563 containerd[2015]: 2025-11-23 23:00:18.097 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24d700ef1c1 ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.211563 containerd[2015]: 2025-11-23 
23:00:18.158 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.211709 containerd[2015]: 2025-11-23 23:00:18.161 [INFO][4802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0", GenerateName:"calico-kube-controllers-856dd64f49-", Namespace:"calico-system", SelfLink:"", UID:"5cc67e78-541e-4794-9086-b55b57263fd2", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"856dd64f49", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c", Pod:"calico-kube-controllers-856dd64f49-b8qsw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24d700ef1c1", MAC:"ba:4a:4d:1e:e6:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:18.212574 containerd[2015]: 2025-11-23 23:00:18.198 [INFO][4802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" Namespace="calico-system" Pod="calico-kube-controllers-856dd64f49-b8qsw" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--kube--controllers--856dd64f49--b8qsw-eth0" Nov 23 23:00:18.287298 systemd-networkd[1821]: cali0c4b7e802fc: Link UP Nov 23 23:00:18.287866 systemd-networkd[1821]: cali0c4b7e802fc: Gained carrier Nov 23 23:00:18.288773 (udev-worker)[4708]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:00:18.315780 containerd[2015]: time="2025-11-23T23:00:18.315687323Z" level=info msg="connecting to shim ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c" address="unix:///run/containerd/s/849524c82937e004410a3f33f34b7a296361651f15cbf8392b27588556a6e81e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:18.376534 containerd[2015]: 2025-11-23 23:00:17.945 [INFO][4828] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:00:18.376534 containerd[2015]: 2025-11-23 23:00:17.994 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0 whisker-585dccbd85- calico-system 3ebffe5d-c943-4ecd-a570-27b8df3681f4 980 0 2025-11-23 23:00:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:585dccbd85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-27 whisker-585dccbd85-tdmw2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0c4b7e802fc [] [] }} ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-" Nov 23 23:00:18.376534 containerd[2015]: 2025-11-23 23:00:17.994 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.376534 containerd[2015]: 2025-11-23 23:00:18.087 [INFO][4842] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" HandleID="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Workload="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.088 [INFO][4842] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" HandleID="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Workload="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb200), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-27", "pod":"whisker-585dccbd85-tdmw2", "timestamp":"2025-11-23 23:00:18.087942314 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:18.376984 
containerd[2015]: 2025-11-23 23:00:18.088 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.088 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.088 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.111 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" host="ip-172-31-24-27" Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.138 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.170 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.180 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:18.376984 containerd[2015]: 2025-11-23 23:00:18.195 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.196 [INFO][4842] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" host="ip-172-31-24-27" Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.205 [INFO][4842] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0 Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.232 [INFO][4842] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 
handle="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" host="ip-172-31-24-27" Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.253 [INFO][4842] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.194/26] block=192.168.122.192/26 handle="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" host="ip-172-31-24-27" Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.254 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.194/26] handle="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" host="ip-172-31-24-27" Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.254 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:18.378245 containerd[2015]: 2025-11-23 23:00:18.254 [INFO][4842] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.194/26] IPv6=[] ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" HandleID="k8s-pod-network.10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Workload="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.378590 containerd[2015]: 2025-11-23 23:00:18.275 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0", GenerateName:"whisker-585dccbd85-", Namespace:"calico-system", SelfLink:"", UID:"3ebffe5d-c943-4ecd-a570-27b8df3681f4", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 0, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585dccbd85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"whisker-585dccbd85-tdmw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0c4b7e802fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:18.378590 containerd[2015]: 2025-11-23 23:00:18.275 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.194/32] ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.380684 containerd[2015]: 2025-11-23 23:00:18.275 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c4b7e802fc ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.380684 containerd[2015]: 2025-11-23 23:00:18.287 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" 
WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.380861 containerd[2015]: 2025-11-23 23:00:18.307 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0", GenerateName:"whisker-585dccbd85-", Namespace:"calico-system", SelfLink:"", UID:"3ebffe5d-c943-4ecd-a570-27b8df3681f4", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 0, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585dccbd85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0", Pod:"whisker-585dccbd85-tdmw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0c4b7e802fc", MAC:"2e:f4:94:ea:3b:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:18.381009 containerd[2015]: 2025-11-23 
23:00:18.361 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" Namespace="calico-system" Pod="whisker-585dccbd85-tdmw2" WorkloadEndpoint="ip--172--31--24--27-k8s-whisker--585dccbd85--tdmw2-eth0" Nov 23 23:00:18.410036 systemd[1]: Started cri-containerd-ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c.scope - libcontainer container ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c. Nov 23 23:00:18.466275 containerd[2015]: time="2025-11-23T23:00:18.464635261Z" level=info msg="connecting to shim 10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0" address="unix:///run/containerd/s/55e47f3911f97c94c2ae2a6aa1ad6d5d50fd3f7dfccaa2898e5381ed47df3a66" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:18.561171 systemd[1]: Started cri-containerd-10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0.scope - libcontainer container 10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0. 
Nov 23 23:00:18.719935 containerd[2015]: time="2025-11-23T23:00:18.719683655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-856dd64f49-b8qsw,Uid:5cc67e78-541e-4794-9086-b55b57263fd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad81043c98c6133af1ca6da966eb84f5331a892959313a9c6fd6e4d31a7ec45c\"" Nov 23 23:00:18.726217 containerd[2015]: time="2025-11-23T23:00:18.725900026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:00:18.893134 containerd[2015]: time="2025-11-23T23:00:18.893075485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585dccbd85-tdmw2,Uid:3ebffe5d-c943-4ecd-a570-27b8df3681f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"10a1ba86f411f452ac3e33c7b16461d9e6ce5d6b755af355d4653332267e9fe0\"" Nov 23 23:00:19.276012 containerd[2015]: time="2025-11-23T23:00:19.275933381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:19.279267 containerd[2015]: time="2025-11-23T23:00:19.279176418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:00:19.279267 containerd[2015]: time="2025-11-23T23:00:19.279220192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:19.279949 kubelet[3557]: E1123 23:00:19.279817 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:19.281951 kubelet[3557]: E1123 23:00:19.279912 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:19.282674 containerd[2015]: time="2025-11-23T23:00:19.282251370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:00:19.283553 kubelet[3557]: E1123 23:00:19.283286 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-876ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,S
ubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-856dd64f49-b8qsw_calico-system(5cc67e78-541e-4794-9086-b55b57263fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:19.285870 kubelet[3557]: E1123 23:00:19.285405 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:00:19.664961 systemd-networkd[1821]: cali24d700ef1c1: Gained IPv6LL Nov 23 23:00:19.676299 containerd[2015]: time="2025-11-23T23:00:19.675867855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7qjg8,Uid:58da0435-c510-4733-869f-85a4fe15eaf3,Namespace:calico-system,Attempt:0,}" Nov 23 23:00:19.688694 containerd[2015]: time="2025-11-23T23:00:19.688647774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrc5l,Uid:722678b5-99d5-4055-8f3f-165568ddd9d9,Namespace:kube-system,Attempt:0,}" Nov 23 23:00:19.854071 containerd[2015]: time="2025-11-23T23:00:19.853641503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:19.859657 containerd[2015]: time="2025-11-23T23:00:19.859356071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:00:19.860579 containerd[2015]: time="2025-11-23T23:00:19.860519752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:00:19.861392 kubelet[3557]: E1123 23:00:19.861060 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:19.862108 kubelet[3557]: E1123 23:00:19.861265 3557 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:19.862811 kubelet[3557]: E1123 23:00:19.862429 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd8ad3e85ed44ff798d8fe6459e599d3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:19.871124 containerd[2015]: time="2025-11-23T23:00:19.870985935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:00:20.123477 kubelet[3557]: E1123 23:00:20.123308 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:00:20.199210 systemd-networkd[1821]: cali0fa7ce22fca: Link UP Nov 23 23:00:20.202388 systemd-networkd[1821]: cali0fa7ce22fca: Gained carrier Nov 23 23:00:20.256652 containerd[2015]: 2025-11-23 23:00:19.906 [INFO][5077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0 goldmane-666569f655- calico-system 58da0435-c510-4733-869f-85a4fe15eaf3 900 0 2025-11-23 22:59:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-27 goldmane-666569f655-7qjg8 eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] cali0fa7ce22fca [] [] }} ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-" Nov 23 23:00:20.256652 containerd[2015]: 2025-11-23 23:00:19.906 [INFO][5077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.256652 containerd[2015]: 2025-11-23 23:00:20.033 [INFO][5106] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" HandleID="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Workload="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.033 [INFO][5106] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" HandleID="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Workload="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000310ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-27", "pod":"goldmane-666569f655-7qjg8", "timestamp":"2025-11-23 23:00:20.033076901 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.034 [INFO][5106] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.035 [INFO][5106] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.035 [INFO][5106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.077 [INFO][5106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" host="ip-172-31-24-27" Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.097 [INFO][5106] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.112 [INFO][5106] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.118 [INFO][5106] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:20.257251 containerd[2015]: 2025-11-23 23:00:20.128 [INFO][5106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:20.257718 containerd[2015]: 2025-11-23 23:00:20.129 [INFO][5106] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" host="ip-172-31-24-27" Nov 23 23:00:20.257718 containerd[2015]: 2025-11-23 23:00:20.138 [INFO][5106] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a Nov 23 23:00:20.257718 containerd[2015]: 2025-11-23 23:00:20.154 [INFO][5106] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" host="ip-172-31-24-27" Nov 23 23:00:20.257718 
containerd[2015]: 2025-11-23 23:00:20.167 [INFO][5106] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.195/26] block=192.168.122.192/26 handle="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" host="ip-172-31-24-27" Nov 23 23:00:20.257718 containerd[2015]: 2025-11-23 23:00:20.167 [INFO][5106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.195/26] handle="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" host="ip-172-31-24-27" Nov 23 23:00:20.257718 containerd[2015]: 2025-11-23 23:00:20.168 [INFO][5106] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:20.257718 containerd[2015]: 2025-11-23 23:00:20.168 [INFO][5106] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.195/26] IPv6=[] ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" HandleID="k8s-pod-network.7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Workload="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.259563 containerd[2015]: 2025-11-23 23:00:20.178 [INFO][5077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"58da0435-c510-4733-869f-85a4fe15eaf3", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"goldmane-666569f655-7qjg8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0fa7ce22fca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:20.259563 containerd[2015]: 2025-11-23 23:00:20.178 [INFO][5077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.195/32] ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.261137 containerd[2015]: 2025-11-23 23:00:20.178 [INFO][5077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fa7ce22fca ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.261137 containerd[2015]: 2025-11-23 23:00:20.203 [INFO][5077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.261457 containerd[2015]: 2025-11-23 23:00:20.203 [INFO][5077] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"58da0435-c510-4733-869f-85a4fe15eaf3", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a", Pod:"goldmane-666569f655-7qjg8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0fa7ce22fca", MAC:"4a:39:ed:8f:f1:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:20.261604 containerd[2015]: 2025-11-23 23:00:20.252 [INFO][5077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" Namespace="calico-system" Pod="goldmane-666569f655-7qjg8" WorkloadEndpoint="ip--172--31--24--27-k8s-goldmane--666569f655--7qjg8-eth0" Nov 23 23:00:20.304940 systemd-networkd[1821]: cali0c4b7e802fc: Gained IPv6LL Nov 23 23:00:20.319257 containerd[2015]: time="2025-11-23T23:00:20.319167817Z" level=info msg="connecting to shim 7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a" address="unix:///run/containerd/s/0f5fb70cf25a40d7ca2243c8fd54cc3e593fccb50f710986eb3cd1cb7c42f20c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:20.335083 systemd-networkd[1821]: cali49dbe1c102c: Link UP Nov 23 23:00:20.339583 systemd-networkd[1821]: cali49dbe1c102c: Gained carrier Nov 23 23:00:20.409506 containerd[2015]: time="2025-11-23T23:00:20.409358282Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:20.415447 containerd[2015]: time="2025-11-23T23:00:20.415363983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:00:20.415672 containerd[2015]: time="2025-11-23T23:00:20.415412067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:20.417882 kubelet[3557]: E1123 23:00:20.417097 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:20.417882 
kubelet[3557]: E1123 23:00:20.417232 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:20.418441 kubelet[3557]: E1123 23:00:20.418289 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*
10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:20.420934 kubelet[3557]: E1123 23:00:20.420852 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:00:20.427035 systemd[1]: Started cri-containerd-7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a.scope - libcontainer container 7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a. 
Nov 23 23:00:20.478909 containerd[2015]: 2025-11-23 23:00:19.987 [INFO][5087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0 coredns-674b8bbfcf- kube-system 722678b5-99d5-4055-8f3f-165568ddd9d9 896 0 2025-11-23 22:59:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-27 coredns-674b8bbfcf-nrc5l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali49dbe1c102c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-" Nov 23 23:00:20.478909 containerd[2015]: 2025-11-23 23:00:19.988 [INFO][5087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.478909 containerd[2015]: 2025-11-23 23:00:20.124 [INFO][5118] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" HandleID="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Workload="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.125 [INFO][5118] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" HandleID="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" 
Workload="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001d00f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-27", "pod":"coredns-674b8bbfcf-nrc5l", "timestamp":"2025-11-23 23:00:20.124983888 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.125 [INFO][5118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.168 [INFO][5118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.168 [INFO][5118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.192 [INFO][5118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" host="ip-172-31-24-27" Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.207 [INFO][5118] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.234 [INFO][5118] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.256 [INFO][5118] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:20.479250 containerd[2015]: 2025-11-23 23:00:20.267 [INFO][5118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.267 [INFO][5118] ipam/ipam.go 1219: 
Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" host="ip-172-31-24-27" Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.272 [INFO][5118] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97 Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.282 [INFO][5118] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" host="ip-172-31-24-27" Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.318 [INFO][5118] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.196/26] block=192.168.122.192/26 handle="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" host="ip-172-31-24-27" Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.318 [INFO][5118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.196/26] handle="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" host="ip-172-31-24-27" Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.318 [INFO][5118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:00:20.479678 containerd[2015]: 2025-11-23 23:00:20.318 [INFO][5118] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.196/26] IPv6=[] ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" HandleID="k8s-pod-network.2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Workload="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.480130 containerd[2015]: 2025-11-23 23:00:20.327 [INFO][5087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"722678b5-99d5-4055-8f3f-165568ddd9d9", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"coredns-674b8bbfcf-nrc5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49dbe1c102c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:20.480130 containerd[2015]: 2025-11-23 23:00:20.327 [INFO][5087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.196/32] ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.480130 containerd[2015]: 2025-11-23 23:00:20.328 [INFO][5087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49dbe1c102c ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.480130 containerd[2015]: 2025-11-23 23:00:20.341 [INFO][5087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.480130 containerd[2015]: 2025-11-23 23:00:20.348 [INFO][5087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"722678b5-99d5-4055-8f3f-165568ddd9d9", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97", Pod:"coredns-674b8bbfcf-nrc5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali49dbe1c102c", MAC:"6a:77:21:88:1f:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:20.480130 containerd[2015]: 2025-11-23 23:00:20.472 [INFO][5087] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" Namespace="kube-system" Pod="coredns-674b8bbfcf-nrc5l" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--nrc5l-eth0" Nov 23 23:00:20.549201 containerd[2015]: time="2025-11-23T23:00:20.549122374Z" level=info msg="connecting to shim 2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97" address="unix:///run/containerd/s/adfb6643a1af20458da417ffe4ebf91f64f5e0ed3c2fe8b5f647c2753f92bf6c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:20.652032 systemd[1]: Started cri-containerd-2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97.scope - libcontainer container 2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97. Nov 23 23:00:20.790110 containerd[2015]: time="2025-11-23T23:00:20.790004301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nrc5l,Uid:722678b5-99d5-4055-8f3f-165568ddd9d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97\"" Nov 23 23:00:20.805213 containerd[2015]: time="2025-11-23T23:00:20.805125041Z" level=info msg="CreateContainer within sandbox \"2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:00:20.836774 containerd[2015]: time="2025-11-23T23:00:20.834065516Z" level=info msg="Container 066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:00:20.855353 containerd[2015]: time="2025-11-23T23:00:20.853249916Z" level=info msg="CreateContainer within sandbox \"2f0a39ed7986f6ce4ab619bf35d1a293b857c3fbc067e183d0c49ff3443fdd97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290\"" Nov 23 23:00:20.855696 containerd[2015]: time="2025-11-23T23:00:20.855584447Z" level=info 
msg="StartContainer for \"066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290\"" Nov 23 23:00:20.858870 containerd[2015]: time="2025-11-23T23:00:20.858699932Z" level=info msg="connecting to shim 066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290" address="unix:///run/containerd/s/adfb6643a1af20458da417ffe4ebf91f64f5e0ed3c2fe8b5f647c2753f92bf6c" protocol=ttrpc version=3 Nov 23 23:00:20.926418 systemd[1]: Started cri-containerd-066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290.scope - libcontainer container 066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290. Nov 23 23:00:20.999288 containerd[2015]: time="2025-11-23T23:00:20.998927551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7qjg8,Uid:58da0435-c510-4733-869f-85a4fe15eaf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c0109ac5abcec2ca6ac325c9a629eac61342aa9886a95cb72957192b9c3855a\"" Nov 23 23:00:21.010110 containerd[2015]: time="2025-11-23T23:00:21.010045324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:00:21.085005 containerd[2015]: time="2025-11-23T23:00:21.084277450Z" level=info msg="StartContainer for \"066c4fd96e735148699c4e027a188a67d85ba0f94e5304fc0e9b79865b244290\" returns successfully" Nov 23 23:00:21.162633 kubelet[3557]: E1123 23:00:21.162530 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:00:21.277009 kubelet[3557]: I1123 23:00:21.276910 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nrc5l" podStartSLOduration=54.276885843 podStartE2EDuration="54.276885843s" podCreationTimestamp="2025-11-23 22:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:00:21.276241517 +0000 UTC m=+57.998541203" watchObservedRunningTime="2025-11-23 23:00:21.276885843 +0000 UTC m=+57.999185421" Nov 23 23:00:21.408788 containerd[2015]: time="2025-11-23T23:00:21.408603130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:21.411201 containerd[2015]: time="2025-11-23T23:00:21.410930409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:00:21.411201 containerd[2015]: time="2025-11-23T23:00:21.411105768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:21.411970 kubelet[3557]: E1123 23:00:21.411344 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:00:21.411970 kubelet[3557]: E1123 23:00:21.411424 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:00:21.411970 kubelet[3557]: E1123 23:00:21.411616 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffwh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7qjg8_calico-system(58da0435-c510-4733-869f-85a4fe15eaf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:21.413263 kubelet[3557]: E1123 23:00:21.412925 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:00:21.444218 systemd-networkd[1821]: vxlan.calico: Link UP Nov 23 23:00:21.444240 systemd-networkd[1821]: vxlan.calico: Gained carrier Nov 23 23:00:21.520956 systemd-networkd[1821]: cali49dbe1c102c: Gained IPv6LL Nov 23 23:00:22.152463 kubelet[3557]: E1123 23:00:22.152360 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:00:22.161046 systemd-networkd[1821]: cali0fa7ce22fca: Gained IPv6LL Nov 23 23:00:23.056992 systemd-networkd[1821]: vxlan.calico: Gained IPv6LL Nov 23 23:00:24.304197 systemd[1]: Started sshd@9-172.31.24.27:22-139.178.68.195:55756.service - OpenSSH per-connection server daemon (139.178.68.195:55756). Nov 23 23:00:24.533894 sshd[5376]: Accepted publickey for core from 139.178.68.195 port 55756 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:24.537104 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:24.547490 systemd-logind[1980]: New session 10 of user core. Nov 23 23:00:24.554003 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:00:24.870791 sshd[5379]: Connection closed by 139.178.68.195 port 55756 Nov 23 23:00:24.872049 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:24.880454 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 23 23:00:24.884274 systemd[1]: sshd@9-172.31.24.27:22-139.178.68.195:55756.service: Deactivated successfully. Nov 23 23:00:24.894689 systemd-logind[1980]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:00:24.900109 systemd-logind[1980]: Removed session 10. Nov 23 23:00:25.737428 ntpd[2231]: Listen normally on 6 vxlan.calico 192.168.122.192:123 Nov 23 23:00:25.737527 ntpd[2231]: Listen normally on 7 cali24d700ef1c1 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 23:00:25.737575 ntpd[2231]: Listen normally on 8 cali0c4b7e802fc [fe80::ecee:eeff:feee:eeee%5]:123 Nov 23 23:00:25.737619 ntpd[2231]: Listen normally on 9 cali0fa7ce22fca [fe80::ecee:eeff:feee:eeee%6]:123 Nov 23 23:00:25.737661 ntpd[2231]: Listen normally on 10 cali49dbe1c102c [fe80::ecee:eeff:feee:eeee%7]:123 Nov 23 23:00:25.737704 ntpd[2231]: Listen normally on 11 vxlan.calico [fe80::641c:72ff:fe60:9b1b%8]:123 Nov 23 23:00:27.674183 containerd[2015]: time="2025-11-23T23:00:27.673654533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xflhj,Uid:39270bf4-b6a6-4d62-8a14-a5e6fd018861,Namespace:calico-system,Attempt:0,}" Nov 23 23:00:27.674183 containerd[2015]: time="2025-11-23T23:00:27.673966822Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6574bf4f5d-qz2dt,Uid:1265050f-2f3c-4c9a-a19e-43d1823e072d,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:00:27.974978 systemd-networkd[1821]: calicc00464f75f: Link UP Nov 23 23:00:27.978296 systemd-networkd[1821]: calicc00464f75f: Gained carrier Nov 23 23:00:27.990142 (udev-worker)[5444]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.792 [INFO][5408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0 calico-apiserver-6574bf4f5d- calico-apiserver 1265050f-2f3c-4c9a-a19e-43d1823e072d 903 0 2025-11-23 22:59:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6574bf4f5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-27 calico-apiserver-6574bf4f5d-qz2dt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc00464f75f [] [] }} ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.794 [INFO][5408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.867 [INFO][5429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" HandleID="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Workload="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.867 [INFO][5429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" HandleID="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Workload="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-27", "pod":"calico-apiserver-6574bf4f5d-qz2dt", "timestamp":"2025-11-23 23:00:27.867007455 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.867 [INFO][5429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.867 [INFO][5429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.867 [INFO][5429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.885 [INFO][5429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.903 [INFO][5429] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.918 [INFO][5429] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.923 [INFO][5429] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.929 [INFO][5429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.929 [INFO][5429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.931 [INFO][5429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.940 [INFO][5429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.953 [INFO][5429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.197/26] block=192.168.122.192/26 
handle="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.953 [INFO][5429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.197/26] handle="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" host="ip-172-31-24-27" Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.954 [INFO][5429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:28.042217 containerd[2015]: 2025-11-23 23:00:27.954 [INFO][5429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.197/26] IPv6=[] ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" HandleID="k8s-pod-network.7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Workload="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 23:00:28.044293 containerd[2015]: 2025-11-23 23:00:27.960 [INFO][5408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0", GenerateName:"calico-apiserver-6574bf4f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"1265050f-2f3c-4c9a-a19e-43d1823e072d", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6574bf4f5d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"calico-apiserver-6574bf4f5d-qz2dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc00464f75f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:28.044293 containerd[2015]: 2025-11-23 23:00:27.960 [INFO][5408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.197/32] ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 23:00:28.044293 containerd[2015]: 2025-11-23 23:00:27.960 [INFO][5408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc00464f75f ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 23:00:28.044293 containerd[2015]: 2025-11-23 23:00:27.979 [INFO][5408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 
23:00:28.044293 containerd[2015]: 2025-11-23 23:00:27.984 [INFO][5408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0", GenerateName:"calico-apiserver-6574bf4f5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"1265050f-2f3c-4c9a-a19e-43d1823e072d", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6574bf4f5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd", Pod:"calico-apiserver-6574bf4f5d-qz2dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc00464f75f", MAC:"fa:02:0d:ec:98:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 
23:00:28.044293 containerd[2015]: 2025-11-23 23:00:28.027 [INFO][5408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" Namespace="calico-apiserver" Pod="calico-apiserver-6574bf4f5d-qz2dt" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--6574bf4f5d--qz2dt-eth0" Nov 23 23:00:28.119196 containerd[2015]: time="2025-11-23T23:00:28.119112734Z" level=info msg="connecting to shim 7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd" address="unix:///run/containerd/s/8c332bd53a692e434b544be9d8d8fd8d49bb220ba937a4872cae26cef0afcede" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:28.133889 systemd-networkd[1821]: cali3ca3d8cf193: Link UP Nov 23 23:00:28.134341 systemd-networkd[1821]: cali3ca3d8cf193: Gained carrier Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.806 [INFO][5404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0 csi-node-driver- calico-system 39270bf4-b6a6-4d62-8a14-a5e6fd018861 797 0 2025-11-23 22:59:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-27 csi-node-driver-xflhj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3ca3d8cf193 [] [] }} ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.807 [INFO][5404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.917 [INFO][5434] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" HandleID="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Workload="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.917 [INFO][5434] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" HandleID="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Workload="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-27", "pod":"csi-node-driver-xflhj", "timestamp":"2025-11-23 23:00:27.917171285 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.917 [INFO][5434] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.954 [INFO][5434] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:27.954 [INFO][5434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.011 [INFO][5434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.046 [INFO][5434] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.060 [INFO][5434] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.064 [INFO][5434] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.071 [INFO][5434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.071 [INFO][5434] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.076 [INFO][5434] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.088 [INFO][5434] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.119 [INFO][5434] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.198/26] block=192.168.122.192/26 
handle="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.119 [INFO][5434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.198/26] handle="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" host="ip-172-31-24-27" Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.119 [INFO][5434] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:28.192122 containerd[2015]: 2025-11-23 23:00:28.120 [INFO][5434] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.198/26] IPv6=[] ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" HandleID="k8s-pod-network.1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Workload="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.193245 containerd[2015]: 2025-11-23 23:00:28.127 [INFO][5404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39270bf4-b6a6-4d62-8a14-a5e6fd018861", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"csi-node-driver-xflhj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ca3d8cf193", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:28.193245 containerd[2015]: 2025-11-23 23:00:28.127 [INFO][5404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.198/32] ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.193245 containerd[2015]: 2025-11-23 23:00:28.127 [INFO][5404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ca3d8cf193 ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.193245 containerd[2015]: 2025-11-23 23:00:28.133 [INFO][5404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.193245 containerd[2015]: 2025-11-23 23:00:28.138 [INFO][5404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"39270bf4-b6a6-4d62-8a14-a5e6fd018861", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c", Pod:"csi-node-driver-xflhj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3ca3d8cf193", MAC:"1e:4f:98:a0:b5:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:28.193245 containerd[2015]: 2025-11-23 23:00:28.181 [INFO][5404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" Namespace="calico-system" Pod="csi-node-driver-xflhj" WorkloadEndpoint="ip--172--31--24--27-k8s-csi--node--driver--xflhj-eth0" Nov 23 23:00:28.236341 systemd[1]: Started cri-containerd-7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd.scope - libcontainer container 7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd. Nov 23 23:00:28.277375 containerd[2015]: time="2025-11-23T23:00:28.277189578Z" level=info msg="connecting to shim 1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c" address="unix:///run/containerd/s/d3468deaa4fc4a43e552425beb5c4ebccf895e53415761d2ec6e9ea469b5a698" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:28.336027 systemd[1]: Started cri-containerd-1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c.scope - libcontainer container 1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c. Nov 23 23:00:28.419417 containerd[2015]: time="2025-11-23T23:00:28.419253348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6574bf4f5d-qz2dt,Uid:1265050f-2f3c-4c9a-a19e-43d1823e072d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7bcd416fc3486bcbb47970b9750a40a394a940afdd388b62e4bd5c3092516cfd\"" Nov 23 23:00:28.423528 containerd[2015]: time="2025-11-23T23:00:28.422644094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:28.437979 containerd[2015]: time="2025-11-23T23:00:28.437919460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xflhj,Uid:39270bf4-b6a6-4d62-8a14-a5e6fd018861,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e261a47c2ec8139608f52b7a2e7ea08527a0132719d4c70ffbb7d7e7af9e49c\"" Nov 23 23:00:28.663980 containerd[2015]: time="2025-11-23T23:00:28.663763728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:28.666037 containerd[2015]: 
time="2025-11-23T23:00:28.665962242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:28.666258 containerd[2015]: time="2025-11-23T23:00:28.666018214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:28.666762 kubelet[3557]: E1123 23:00:28.666599 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:28.666762 kubelet[3557]: E1123 23:00:28.666667 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:28.668193 kubelet[3557]: E1123 23:00:28.667115 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqcn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:28.668774 kubelet[3557]: E1123 23:00:28.668486 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:28.669178 containerd[2015]: time="2025-11-23T23:00:28.669055864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:00:28.903927 containerd[2015]: time="2025-11-23T23:00:28.903838551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:28.906114 containerd[2015]: time="2025-11-23T23:00:28.906005622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:00:28.906231 containerd[2015]: time="2025-11-23T23:00:28.906121167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:00:28.906480 kubelet[3557]: E1123 23:00:28.906427 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:28.906565 kubelet[3557]: E1123 23:00:28.906494 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:28.906822 kubelet[3557]: E1123 23:00:28.906714 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:28.910407 containerd[2015]: time="2025-11-23T23:00:28.910350449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:00:29.183925 kubelet[3557]: E1123 23:00:29.183812 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:29.220444 containerd[2015]: time="2025-11-23T23:00:29.220297263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:29.223207 containerd[2015]: time="2025-11-23T23:00:29.223092512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:00:29.223910 containerd[2015]: time="2025-11-23T23:00:29.223251808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:00:29.224694 kubelet[3557]: E1123 23:00:29.224350 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:29.224694 kubelet[3557]: E1123 23:00:29.224415 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:29.224694 kubelet[3557]: E1123 23:00:29.224595 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:29.225908 kubelet[3557]: E1123 23:00:29.225844 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:00:29.674944 containerd[2015]: time="2025-11-23T23:00:29.674602013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-gfvhg,Uid:a3a85933-215d-434f-beb8-3b039c057228,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:00:29.841104 systemd-networkd[1821]: cali3ca3d8cf193: Gained IPv6LL Nov 23 23:00:29.901675 systemd-networkd[1821]: calid90df9f72d4: Link UP Nov 23 23:00:29.904650 systemd-networkd[1821]: calid90df9f72d4: Gained carrier Nov 23 23:00:29.905310 systemd-networkd[1821]: calicc00464f75f: Gained IPv6LL Nov 23 23:00:29.917476 systemd[1]: Started sshd@10-172.31.24.27:22-139.178.68.195:55768.service - OpenSSH per-connection server daemon (139.178.68.195:55768). 
Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.751 [INFO][5559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0 calico-apiserver-7fcd9fb754- calico-apiserver a3a85933-215d-434f-beb8-3b039c057228 901 0 2025-11-23 22:59:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcd9fb754 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-27 calico-apiserver-7fcd9fb754-gfvhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid90df9f72d4 [] [] }} ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.751 [INFO][5559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.808 [INFO][5572] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" HandleID="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Workload="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.809 [INFO][5572] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" 
HandleID="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Workload="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-27", "pod":"calico-apiserver-7fcd9fb754-gfvhg", "timestamp":"2025-11-23 23:00:29.808781118 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.809 [INFO][5572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.809 [INFO][5572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.809 [INFO][5572] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.827 [INFO][5572] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.836 [INFO][5572] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.850 [INFO][5572] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.854 [INFO][5572] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.859 [INFO][5572] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 
host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.859 [INFO][5572] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.862 [INFO][5572] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.872 [INFO][5572] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.886 [INFO][5572] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.199/26] block=192.168.122.192/26 handle="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.886 [INFO][5572] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.199/26] handle="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" host="ip-172-31-24-27" Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.886 [INFO][5572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:00:29.958749 containerd[2015]: 2025-11-23 23:00:29.887 [INFO][5572] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.199/26] IPv6=[] ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" HandleID="k8s-pod-network.d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Workload="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:29.962553 containerd[2015]: 2025-11-23 23:00:29.892 [INFO][5559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0", GenerateName:"calico-apiserver-7fcd9fb754-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3a85933-215d-434f-beb8-3b039c057228", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd9fb754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"calico-apiserver-7fcd9fb754-gfvhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid90df9f72d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:29.962553 containerd[2015]: 2025-11-23 23:00:29.893 [INFO][5559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.199/32] ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:29.962553 containerd[2015]: 2025-11-23 23:00:29.893 [INFO][5559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid90df9f72d4 ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:29.962553 containerd[2015]: 2025-11-23 23:00:29.909 [INFO][5559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:29.962553 containerd[2015]: 2025-11-23 23:00:29.911 [INFO][5559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0", GenerateName:"calico-apiserver-7fcd9fb754-", Namespace:"calico-apiserver", SelfLink:"", UID:"a3a85933-215d-434f-beb8-3b039c057228", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd9fb754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb", Pod:"calico-apiserver-7fcd9fb754-gfvhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid90df9f72d4", MAC:"52:0f:19:84:d5:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:29.962553 containerd[2015]: 2025-11-23 23:00:29.940 [INFO][5559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-gfvhg" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--gfvhg-eth0" Nov 23 23:00:30.038951 containerd[2015]: time="2025-11-23T23:00:30.038877538Z" level=info msg="connecting to shim 
d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb" address="unix:///run/containerd/s/c6a882e815bee678792003c734f45b17219f9ce3046e39daf61fefb1f0b79c5f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:30.093071 systemd[1]: Started cri-containerd-d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb.scope - libcontainer container d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb. Nov 23 23:00:30.176558 sshd[5581]: Accepted publickey for core from 139.178.68.195 port 55768 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:30.182103 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:30.199157 kubelet[3557]: E1123 23:00:30.199103 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:30.200338 systemd-logind[1980]: New session 11 of user core. Nov 23 23:00:30.206234 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 23 23:00:30.209136 kubelet[3557]: E1123 23:00:30.208698 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:00:30.211364 containerd[2015]: time="2025-11-23T23:00:30.211168850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-gfvhg,Uid:a3a85933-215d-434f-beb8-3b039c057228,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d23fd246b3d19630bce8277290f1838f946621b01604cc4c18c65187b11528bb\"" Nov 23 23:00:30.224956 containerd[2015]: time="2025-11-23T23:00:30.223995340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:30.502470 containerd[2015]: time="2025-11-23T23:00:30.502380601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:30.505064 containerd[2015]: time="2025-11-23T23:00:30.504896339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:30.505064 containerd[2015]: time="2025-11-23T23:00:30.504989781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:30.505430 kubelet[3557]: E1123 23:00:30.505385 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:30.505661 kubelet[3557]: E1123 23:00:30.505585 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:30.506378 kubelet[3557]: E1123 23:00:30.506267 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p89f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:30.508061 kubelet[3557]: E1123 23:00:30.507977 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:30.511557 sshd[5637]: Connection closed by 139.178.68.195 port 55768 Nov 23 23:00:30.512645 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:30.524170 systemd-logind[1980]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:00:30.525164 systemd[1]: sshd@10-172.31.24.27:22-139.178.68.195:55768.service: Deactivated successfully. Nov 23 23:00:30.531040 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:00:30.538845 systemd-logind[1980]: Removed session 11. 
Nov 23 23:00:30.673074 containerd[2015]: time="2025-11-23T23:00:30.672992634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2244g,Uid:c32bb835-766b-4882-947e-95ccedc2df07,Namespace:kube-system,Attempt:0,}" Nov 23 23:00:30.902894 systemd-networkd[1821]: cali621172b4bcd: Link UP Nov 23 23:00:30.903689 systemd-networkd[1821]: cali621172b4bcd: Gained carrier Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.745 [INFO][5650] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0 coredns-674b8bbfcf- kube-system c32bb835-766b-4882-947e-95ccedc2df07 904 0 2025-11-23 22:59:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-27 coredns-674b8bbfcf-2244g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali621172b4bcd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.746 [INFO][5650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.798 [INFO][5661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" HandleID="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" 
Workload="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.798 [INFO][5661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" HandleID="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Workload="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9b00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-27", "pod":"coredns-674b8bbfcf-2244g", "timestamp":"2025-11-23 23:00:30.798428249 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.798 [INFO][5661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.799 [INFO][5661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.799 [INFO][5661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.817 [INFO][5661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.828 [INFO][5661] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.836 [INFO][5661] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.840 [INFO][5661] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.864 [INFO][5661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.864 [INFO][5661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.867 [INFO][5661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.875 [INFO][5661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.889 [INFO][5661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.200/26] block=192.168.122.192/26 
handle="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.890 [INFO][5661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.200/26] handle="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" host="ip-172-31-24-27" Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.890 [INFO][5661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:30.936969 containerd[2015]: 2025-11-23 23:00:30.890 [INFO][5661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.200/26] IPv6=[] ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" HandleID="k8s-pod-network.a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Workload="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:30.938701 containerd[2015]: 2025-11-23 23:00:30.894 [INFO][5650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c32bb835-766b-4882-947e-95ccedc2df07", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"coredns-674b8bbfcf-2244g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali621172b4bcd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:30.938701 containerd[2015]: 2025-11-23 23:00:30.894 [INFO][5650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.200/32] ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:30.938701 containerd[2015]: 2025-11-23 23:00:30.895 [INFO][5650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali621172b4bcd ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:30.938701 containerd[2015]: 2025-11-23 23:00:30.901 [INFO][5650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:30.938701 containerd[2015]: 2025-11-23 23:00:30.902 [INFO][5650] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c32bb835-766b-4882-947e-95ccedc2df07", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b", Pod:"coredns-674b8bbfcf-2244g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali621172b4bcd", MAC:"4a:d7:87:d6:b9:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:30.938701 containerd[2015]: 2025-11-23 23:00:30.932 [INFO][5650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" Namespace="kube-system" Pod="coredns-674b8bbfcf-2244g" WorkloadEndpoint="ip--172--31--24--27-k8s-coredns--674b8bbfcf--2244g-eth0" Nov 23 23:00:31.007566 containerd[2015]: time="2025-11-23T23:00:31.007454464Z" level=info msg="connecting to shim a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b" address="unix:///run/containerd/s/4a4b5d07f7e544f80e6bab4062ef6ae5e649e6317b774d9ba4214b5b42f0213d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:31.072513 systemd[1]: Started cri-containerd-a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b.scope - libcontainer container a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b. 
Nov 23 23:00:31.182294 containerd[2015]: time="2025-11-23T23:00:31.181856082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2244g,Uid:c32bb835-766b-4882-947e-95ccedc2df07,Namespace:kube-system,Attempt:0,} returns sandbox id \"a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b\"" Nov 23 23:00:31.195879 containerd[2015]: time="2025-11-23T23:00:31.195264407Z" level=info msg="CreateContainer within sandbox \"a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:00:31.208336 kubelet[3557]: E1123 23:00:31.208150 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:31.241179 containerd[2015]: time="2025-11-23T23:00:31.238787358Z" level=info msg="Container 2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:00:31.263761 containerd[2015]: time="2025-11-23T23:00:31.263648638Z" level=info msg="CreateContainer within sandbox \"a057359c4cd23c99d09fc70f59bdc1aa75264d7b0c14d4f85d7cd2c373aaf51b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9\"" Nov 23 23:00:31.272764 containerd[2015]: time="2025-11-23T23:00:31.272155634Z" level=info msg="StartContainer for \"2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9\"" Nov 23 23:00:31.276484 containerd[2015]: 
time="2025-11-23T23:00:31.276335055Z" level=info msg="connecting to shim 2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9" address="unix:///run/containerd/s/4a4b5d07f7e544f80e6bab4062ef6ae5e649e6317b774d9ba4214b5b42f0213d" protocol=ttrpc version=3 Nov 23 23:00:31.331080 systemd[1]: Started cri-containerd-2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9.scope - libcontainer container 2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9. Nov 23 23:00:31.403791 containerd[2015]: time="2025-11-23T23:00:31.403364094Z" level=info msg="StartContainer for \"2077edc47306f5c4bcd5b11452fb9f0d67cc27a185ce161278dc6b5fa68cd7e9\" returns successfully" Nov 23 23:00:31.570485 systemd-networkd[1821]: calid90df9f72d4: Gained IPv6LL Nov 23 23:00:31.675317 containerd[2015]: time="2025-11-23T23:00:31.675173453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-2c5lm,Uid:82cd0774-54f5-4b66-8a2d-bd758439764f,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:00:32.016833 systemd-networkd[1821]: cali799179c2e9d: Link UP Nov 23 23:00:32.019473 systemd-networkd[1821]: cali799179c2e9d: Gained carrier Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.795 [INFO][5760] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0 calico-apiserver-7fcd9fb754- calico-apiserver 82cd0774-54f5-4b66-8a2d-bd758439764f 902 0 2025-11-23 22:59:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fcd9fb754 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-27 calico-apiserver-7fcd9fb754-2c5lm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali799179c2e9d [] [] }} 
ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.796 [INFO][5760] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.901 [INFO][5772] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" HandleID="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Workload="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.902 [INFO][5772] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" HandleID="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Workload="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000281100), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-27", "pod":"calico-apiserver-7fcd9fb754-2c5lm", "timestamp":"2025-11-23 23:00:31.901908421 +0000 UTC"}, Hostname:"ip-172-31-24-27", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.902 [INFO][5772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.902 [INFO][5772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.902 [INFO][5772] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-27' Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.920 [INFO][5772] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.930 [INFO][5772] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.942 [INFO][5772] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.947 [INFO][5772] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.955 [INFO][5772] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.957 [INFO][5772] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.960 [INFO][5772] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1 Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.973 [INFO][5772] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" host="ip-172-31-24-27" Nov 23 23:00:32.062872 
containerd[2015]: 2025-11-23 23:00:31.997 [INFO][5772] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.201/26] block=192.168.122.192/26 handle="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.998 [INFO][5772] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.201/26] handle="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" host="ip-172-31-24-27" Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.998 [INFO][5772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:00:32.062872 containerd[2015]: 2025-11-23 23:00:31.998 [INFO][5772] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.201/26] IPv6=[] ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" HandleID="k8s-pod-network.112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Workload="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.068354 containerd[2015]: 2025-11-23 23:00:32.004 [INFO][5760] cni-plugin/k8s.go 418: Populated endpoint ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0", GenerateName:"calico-apiserver-7fcd9fb754-", Namespace:"calico-apiserver", SelfLink:"", UID:"82cd0774-54f5-4b66-8a2d-bd758439764f", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd9fb754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"", Pod:"calico-apiserver-7fcd9fb754-2c5lm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali799179c2e9d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:32.068354 containerd[2015]: 2025-11-23 23:00:32.005 [INFO][5760] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.201/32] ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.068354 containerd[2015]: 2025-11-23 23:00:32.005 [INFO][5760] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali799179c2e9d ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.068354 containerd[2015]: 2025-11-23 23:00:32.020 [INFO][5760] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" 
Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.068354 containerd[2015]: 2025-11-23 23:00:32.021 [INFO][5760] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0", GenerateName:"calico-apiserver-7fcd9fb754-", Namespace:"calico-apiserver", SelfLink:"", UID:"82cd0774-54f5-4b66-8a2d-bd758439764f", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 59, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fcd9fb754", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-27", ContainerID:"112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1", Pod:"calico-apiserver-7fcd9fb754-2c5lm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali799179c2e9d", MAC:"0e:82:21:f4:da:b9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:00:32.068354 containerd[2015]: 2025-11-23 23:00:32.053 [INFO][5760] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" Namespace="calico-apiserver" Pod="calico-apiserver-7fcd9fb754-2c5lm" WorkloadEndpoint="ip--172--31--24--27-k8s-calico--apiserver--7fcd9fb754--2c5lm-eth0" Nov 23 23:00:32.138904 containerd[2015]: time="2025-11-23T23:00:32.138128346Z" level=info msg="connecting to shim 112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1" address="unix:///run/containerd/s/aee9435be33442394586e77672fbe3282bce18f4939489273b5d66107fd6dca0" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:00:32.230741 kubelet[3557]: E1123 23:00:32.230646 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:32.234299 systemd[1]: Started cri-containerd-112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1.scope - libcontainer container 112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1. 
Nov 23 23:00:32.373491 kubelet[3557]: I1123 23:00:32.373203 3557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2244g" podStartSLOduration=65.373178058 podStartE2EDuration="1m5.373178058s" podCreationTimestamp="2025-11-23 22:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:00:32.310766295 +0000 UTC m=+69.033065885" watchObservedRunningTime="2025-11-23 23:00:32.373178058 +0000 UTC m=+69.095477612" Nov 23 23:00:32.464998 systemd-networkd[1821]: cali621172b4bcd: Gained IPv6LL Nov 23 23:00:32.684349 containerd[2015]: time="2025-11-23T23:00:32.683948411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fcd9fb754-2c5lm,Uid:82cd0774-54f5-4b66-8a2d-bd758439764f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"112fc9f4808d8af51a69925749f8dedeb2b093847bb6a3e39e421ab3d509ead1\"" Nov 23 23:00:32.690696 containerd[2015]: time="2025-11-23T23:00:32.688996730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:32.939006 containerd[2015]: time="2025-11-23T23:00:32.938837822Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:32.941234 containerd[2015]: time="2025-11-23T23:00:32.941154212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:32.941562 containerd[2015]: time="2025-11-23T23:00:32.941517165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:32.942013 kubelet[3557]: E1123 23:00:32.941857 3557 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:32.942888 kubelet[3557]: E1123 23:00:32.942261 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:32.942888 kubelet[3557]: E1123 23:00:32.942488 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:32.944094 kubelet[3557]: E1123 23:00:32.944015 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:00:33.230651 kubelet[3557]: E1123 23:00:33.229339 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:00:33.489054 systemd-networkd[1821]: cali799179c2e9d: Gained IPv6LL Nov 23 23:00:34.232124 kubelet[3557]: E1123 23:00:34.232058 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:00:34.676030 containerd[2015]: time="2025-11-23T23:00:34.675961194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:00:34.946045 containerd[2015]: time="2025-11-23T23:00:34.945414867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:34.948373 containerd[2015]: time="2025-11-23T23:00:34.948258308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:00:34.948373 containerd[2015]: time="2025-11-23T23:00:34.948334979Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:34.948685 kubelet[3557]: E1123 23:00:34.948558 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:34.948685 kubelet[3557]: E1123 23:00:34.948624 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:34.948936 kubelet[3557]: E1123 23:00:34.948845 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-876ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-856dd64f49-b8qsw_calico-system(5cc67e78-541e-4794-9086-b55b57263fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:34.950839 kubelet[3557]: E1123 23:00:34.950585 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:00:35.553037 systemd[1]: Started sshd@11-172.31.24.27:22-139.178.68.195:45062.service - OpenSSH per-connection server daemon (139.178.68.195:45062). 
Nov 23 23:00:35.679636 containerd[2015]: time="2025-11-23T23:00:35.679501380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:00:35.737489 ntpd[2231]: Listen normally on 12 calicc00464f75f [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 23:00:35.738100 ntpd[2231]: Listen normally on 13 cali3ca3d8cf193 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 23:00:35.738154 ntpd[2231]: Listen normally on 14 calid90df9f72d4 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 23 23:00:35.738197 ntpd[2231]: Listen normally on 15 cali621172b4bcd [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 23:00:35.738241 ntpd[2231]: Listen normally on 16 cali799179c2e9d [fe80::ecee:eeff:feee:eeee%15]:123 Nov 23 23:00:35.766157 sshd[5848]: Accepted publickey for core from 139.178.68.195 port 45062 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:35.769766 sshd-session[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:35.779826 systemd-logind[1980]: New session 12 of user core. Nov 23 23:00:35.788092 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 23 23:00:35.989049 containerd[2015]: time="2025-11-23T23:00:35.988879842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:35.992480 containerd[2015]: time="2025-11-23T23:00:35.992277360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:00:35.992480 containerd[2015]: time="2025-11-23T23:00:35.992366037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:00:35.993880 kubelet[3557]: E1123 23:00:35.993775 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:35.994490 kubelet[3557]: E1123 23:00:35.993890 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:00:35.994490 kubelet[3557]: E1123 23:00:35.994235 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd8ad3e85ed44ff798d8fe6459e599d3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:35.999656 containerd[2015]: time="2025-11-23T23:00:35.998853407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:00:36.110740 sshd[5851]: Connection closed by 139.178.68.195 port 45062 Nov 23 23:00:36.111645 sshd-session[5848]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:36.120487 systemd[1]: sshd@11-172.31.24.27:22-139.178.68.195:45062.service: Deactivated successfully. Nov 23 23:00:36.125262 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:00:36.127606 systemd-logind[1980]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:00:36.130834 systemd-logind[1980]: Removed session 12. Nov 23 23:00:36.148019 systemd[1]: Started sshd@12-172.31.24.27:22-139.178.68.195:45074.service - OpenSSH per-connection server daemon (139.178.68.195:45074). Nov 23 23:00:36.289283 containerd[2015]: time="2025-11-23T23:00:36.289235975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:36.292956 containerd[2015]: time="2025-11-23T23:00:36.292774540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:36.292956 containerd[2015]: time="2025-11-23T23:00:36.292796487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:00:36.293282 kubelet[3557]: E1123 23:00:36.293212 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:36.293282 kubelet[3557]: E1123 23:00:36.293272 3557 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:00:36.293890 kubelet[3557]: E1123 23:00:36.293433 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:36.295110 kubelet[3557]: E1123 23:00:36.294993 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:00:36.343931 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 45074 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:36.346673 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:36.359230 systemd-logind[1980]: New session 13 of user core. Nov 23 23:00:36.367059 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 23 23:00:36.676095 containerd[2015]: time="2025-11-23T23:00:36.675819177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:00:36.815097 sshd[5867]: Connection closed by 139.178.68.195 port 45074 Nov 23 23:00:36.819158 sshd-session[5864]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:36.830163 systemd[1]: sshd@12-172.31.24.27:22-139.178.68.195:45074.service: Deactivated successfully. Nov 23 23:00:36.838710 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:00:36.869025 systemd-logind[1980]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:00:36.872951 systemd[1]: Started sshd@13-172.31.24.27:22-139.178.68.195:45088.service - OpenSSH per-connection server daemon (139.178.68.195:45088). Nov 23 23:00:36.876759 systemd-logind[1980]: Removed session 13. Nov 23 23:00:36.936756 containerd[2015]: time="2025-11-23T23:00:36.936340302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:36.940003 containerd[2015]: time="2025-11-23T23:00:36.939821406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:00:36.940003 containerd[2015]: time="2025-11-23T23:00:36.939952860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:36.941620 kubelet[3557]: E1123 23:00:36.940427 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 
23:00:36.941620 kubelet[3557]: E1123 23:00:36.940490 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:00:36.941620 kubelet[3557]: E1123 23:00:36.940683 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffwh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeH
andler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7qjg8_calico-system(58da0435-c510-4733-869f-85a4fe15eaf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:36.942878 kubelet[3557]: E1123 23:00:36.942434 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" 
podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:00:37.105695 sshd[5877]: Accepted publickey for core from 139.178.68.195 port 45088 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:37.110981 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:37.124188 systemd-logind[1980]: New session 14 of user core. Nov 23 23:00:37.133283 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:00:37.424501 sshd[5880]: Connection closed by 139.178.68.195 port 45088 Nov 23 23:00:37.426118 sshd-session[5877]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:37.434214 systemd[1]: sshd@13-172.31.24.27:22-139.178.68.195:45088.service: Deactivated successfully. Nov 23 23:00:37.440875 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:00:37.443052 systemd-logind[1980]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:00:37.446982 systemd-logind[1980]: Removed session 14. Nov 23 23:00:42.473308 systemd[1]: Started sshd@14-172.31.24.27:22-139.178.68.195:37056.service - OpenSSH per-connection server daemon (139.178.68.195:37056). Nov 23 23:00:42.675485 containerd[2015]: time="2025-11-23T23:00:42.675108296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:42.685430 sshd[5900]: Accepted publickey for core from 139.178.68.195 port 37056 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:42.690161 sshd-session[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:42.700825 systemd-logind[1980]: New session 15 of user core. Nov 23 23:00:42.708011 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 23 23:00:42.940354 containerd[2015]: time="2025-11-23T23:00:42.940156440Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:42.942933 containerd[2015]: time="2025-11-23T23:00:42.942787147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:42.942933 containerd[2015]: time="2025-11-23T23:00:42.942841690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:42.943422 kubelet[3557]: E1123 23:00:42.943352 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:42.944369 kubelet[3557]: E1123 23:00:42.944038 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:42.944369 kubelet[3557]: E1123 23:00:42.944268 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqcn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:42.945644 kubelet[3557]: E1123 23:00:42.945512 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:42.960765 sshd[5905]: Connection closed by 139.178.68.195 port 37056 Nov 23 23:00:42.961373 sshd-session[5900]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:42.969034 systemd[1]: sshd@14-172.31.24.27:22-139.178.68.195:37056.service: Deactivated successfully. Nov 23 23:00:42.975762 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:00:42.978346 systemd-logind[1980]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:00:42.981957 systemd-logind[1980]: Removed session 15. 
Nov 23 23:00:44.675010 containerd[2015]: time="2025-11-23T23:00:44.674901519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:00:44.929199 containerd[2015]: time="2025-11-23T23:00:44.928921119Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:44.931492 containerd[2015]: time="2025-11-23T23:00:44.931363608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:00:44.931492 containerd[2015]: time="2025-11-23T23:00:44.931429389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:00:44.932052 kubelet[3557]: E1123 23:00:44.931966 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:44.932583 kubelet[3557]: E1123 23:00:44.932053 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:00:44.932973 kubelet[3557]: E1123 23:00:44.932851 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:44.937210 containerd[2015]: time="2025-11-23T23:00:44.937117543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:00:45.220843 containerd[2015]: time="2025-11-23T23:00:45.219713657Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:45.223479 containerd[2015]: time="2025-11-23T23:00:45.223288839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:00:45.223479 containerd[2015]: time="2025-11-23T23:00:45.223374550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:00:45.223879 kubelet[3557]: E1123 23:00:45.223687 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:45.223879 kubelet[3557]: E1123 23:00:45.223792 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:00:45.225123 kubelet[3557]: E1123 
23:00:45.225026 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:45.227051 kubelet[3557]: E1123 23:00:45.226962 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:00:45.681121 kubelet[3557]: E1123 23:00:45.681058 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:00:45.682810 containerd[2015]: 
time="2025-11-23T23:00:45.682333210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:45.950890 containerd[2015]: time="2025-11-23T23:00:45.950692873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:45.953106 containerd[2015]: time="2025-11-23T23:00:45.953038497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:45.953301 containerd[2015]: time="2025-11-23T23:00:45.953158953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:45.953457 kubelet[3557]: E1123 23:00:45.953386 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:45.954155 kubelet[3557]: E1123 23:00:45.953456 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:45.954155 kubelet[3557]: E1123 23:00:45.953665 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:45.955224 kubelet[3557]: E1123 23:00:45.954802 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:00:46.674221 containerd[2015]: time="2025-11-23T23:00:46.674153319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:00:46.913137 containerd[2015]: time="2025-11-23T23:00:46.913025585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:46.915927 containerd[2015]: time="2025-11-23T23:00:46.915830955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:00:46.916104 containerd[2015]: time="2025-11-23T23:00:46.915846887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:00:46.916457 kubelet[3557]: E1123 23:00:46.916345 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:46.916593 kubelet[3557]: E1123 23:00:46.916466 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:00:46.917762 kubelet[3557]: E1123 23:00:46.916826 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p89f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:46.918268 kubelet[3557]: E1123 23:00:46.918220 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:48.007333 systemd[1]: Started sshd@15-172.31.24.27:22-139.178.68.195:37066.service - OpenSSH per-connection server daemon (139.178.68.195:37066). 
Nov 23 23:00:48.209717 sshd[5920]: Accepted publickey for core from 139.178.68.195 port 37066 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:48.213292 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:48.222794 systemd-logind[1980]: New session 16 of user core. Nov 23 23:00:48.231025 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:00:48.491572 sshd[5946]: Connection closed by 139.178.68.195 port 37066 Nov 23 23:00:48.492545 sshd-session[5920]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:48.500689 systemd-logind[1980]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:00:48.504291 systemd[1]: sshd@15-172.31.24.27:22-139.178.68.195:37066.service: Deactivated successfully. Nov 23 23:00:48.512453 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:00:48.521155 systemd-logind[1980]: Removed session 16. Nov 23 23:00:50.677394 kubelet[3557]: E1123 23:00:50.677266 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" 
podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:00:51.678582 kubelet[3557]: E1123 23:00:51.678493 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:00:53.533764 systemd[1]: Started sshd@16-172.31.24.27:22-139.178.68.195:45678.service - OpenSSH per-connection server daemon (139.178.68.195:45678). Nov 23 23:00:53.738150 sshd[5963]: Accepted publickey for core from 139.178.68.195 port 45678 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:53.740515 sshd-session[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:53.752699 systemd-logind[1980]: New session 17 of user core. Nov 23 23:00:53.758039 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 23:00:54.022938 sshd[5966]: Connection closed by 139.178.68.195 port 45678 Nov 23 23:00:54.024838 sshd-session[5963]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:54.032689 systemd[1]: sshd@16-172.31.24.27:22-139.178.68.195:45678.service: Deactivated successfully. Nov 23 23:00:54.036888 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 23:00:54.039114 systemd-logind[1980]: Session 17 logged out. Waiting for processes to exit. Nov 23 23:00:54.043017 systemd-logind[1980]: Removed session 17. 
Nov 23 23:00:55.674775 kubelet[3557]: E1123 23:00:55.673255 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:00:56.682862 containerd[2015]: time="2025-11-23T23:00:56.682790399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:00:56.956271 containerd[2015]: time="2025-11-23T23:00:56.956109812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:00:56.958482 containerd[2015]: time="2025-11-23T23:00:56.958386906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:00:56.958655 containerd[2015]: time="2025-11-23T23:00:56.958445171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:00:56.958959 kubelet[3557]: E1123 23:00:56.958907 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:56.960994 kubelet[3557]: E1123 23:00:56.960578 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:00:56.960994 kubelet[3557]: E1123 23:00:56.960869 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-876ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-856dd64f49-b8qsw_calico-system(5cc67e78-541e-4794-9086-b55b57263fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:00:56.962568 kubelet[3557]: E1123 23:00:56.962473 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:00:58.676946 kubelet[3557]: E1123 23:00:58.676807 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:00:59.064343 systemd[1]: Started sshd@17-172.31.24.27:22-139.178.68.195:45684.service - OpenSSH per-connection server daemon (139.178.68.195:45684). Nov 23 23:00:59.268570 sshd[5981]: Accepted publickey for core from 139.178.68.195 port 45684 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:59.271093 sshd-session[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:59.279882 systemd-logind[1980]: New session 18 of user core. Nov 23 23:00:59.290011 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 23 23:00:59.574594 sshd[5984]: Connection closed by 139.178.68.195 port 45684 Nov 23 23:00:59.575453 sshd-session[5981]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:59.582497 systemd[1]: sshd@17-172.31.24.27:22-139.178.68.195:45684.service: Deactivated successfully. Nov 23 23:00:59.587302 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 23:00:59.589154 systemd-logind[1980]: Session 18 logged out. Waiting for processes to exit. Nov 23 23:00:59.593284 systemd-logind[1980]: Removed session 18. Nov 23 23:00:59.613841 systemd[1]: Started sshd@18-172.31.24.27:22-139.178.68.195:45690.service - OpenSSH per-connection server daemon (139.178.68.195:45690). Nov 23 23:00:59.675700 kubelet[3557]: E1123 23:00:59.675611 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:00:59.840262 sshd[5995]: Accepted publickey for core from 139.178.68.195 port 45690 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:00:59.843497 sshd-session[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:59.854757 systemd-logind[1980]: New session 19 of user core. Nov 23 23:00:59.864176 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 23 23:01:00.361402 sshd[5998]: Connection closed by 139.178.68.195 port 45690 Nov 23 23:01:00.362707 sshd-session[5995]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:00.373395 systemd[1]: sshd@18-172.31.24.27:22-139.178.68.195:45690.service: Deactivated successfully. Nov 23 23:01:00.377748 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 23:01:00.379893 systemd-logind[1980]: Session 19 logged out. Waiting for processes to exit. Nov 23 23:01:00.394691 systemd-logind[1980]: Removed session 19. Nov 23 23:01:00.399199 systemd[1]: Started sshd@19-172.31.24.27:22-139.178.68.195:49646.service - OpenSSH per-connection server daemon (139.178.68.195:49646). Nov 23 23:01:00.601444 sshd[6009]: Accepted publickey for core from 139.178.68.195 port 49646 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:00.604062 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:00.612893 systemd-logind[1980]: New session 20 of user core. Nov 23 23:01:00.627004 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 23 23:01:00.675209 kubelet[3557]: E1123 23:01:00.675037 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:01:01.811432 sshd[6012]: Connection closed by 139.178.68.195 port 49646 Nov 23 23:01:01.812082 sshd-session[6009]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:01.824233 systemd[1]: sshd@19-172.31.24.27:22-139.178.68.195:49646.service: Deactivated successfully. Nov 23 23:01:01.830160 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 23:01:01.834088 systemd-logind[1980]: Session 20 logged out. Waiting for processes to exit. Nov 23 23:01:01.866480 systemd[1]: Started sshd@20-172.31.24.27:22-139.178.68.195:49658.service - OpenSSH per-connection server daemon (139.178.68.195:49658). Nov 23 23:01:01.871345 systemd-logind[1980]: Removed session 20. Nov 23 23:01:02.106427 sshd[6032]: Accepted publickey for core from 139.178.68.195 port 49658 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:02.111544 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:02.127026 systemd-logind[1980]: New session 21 of user core. Nov 23 23:01:02.137387 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 23 23:01:02.740800 sshd[6043]: Connection closed by 139.178.68.195 port 49658 Nov 23 23:01:02.741691 sshd-session[6032]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:02.752696 systemd[1]: sshd@20-172.31.24.27:22-139.178.68.195:49658.service: Deactivated successfully. Nov 23 23:01:02.759556 systemd[1]: session-21.scope: Deactivated successfully. Nov 23 23:01:02.762149 systemd-logind[1980]: Session 21 logged out. Waiting for processes to exit. Nov 23 23:01:02.780482 systemd[1]: Started sshd@21-172.31.24.27:22-139.178.68.195:49674.service - OpenSSH per-connection server daemon (139.178.68.195:49674). Nov 23 23:01:02.783560 systemd-logind[1980]: Removed session 21. Nov 23 23:01:02.984575 sshd[6053]: Accepted publickey for core from 139.178.68.195 port 49674 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:02.986997 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:02.996261 systemd-logind[1980]: New session 22 of user core. Nov 23 23:01:03.007004 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 23 23:01:03.248340 sshd[6056]: Connection closed by 139.178.68.195 port 49674 Nov 23 23:01:03.249315 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:03.256414 systemd[1]: sshd@21-172.31.24.27:22-139.178.68.195:49674.service: Deactivated successfully. Nov 23 23:01:03.262510 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 23:01:03.265461 systemd-logind[1980]: Session 22 logged out. Waiting for processes to exit. Nov 23 23:01:03.269445 systemd-logind[1980]: Removed session 22. 
Nov 23 23:01:04.674833 containerd[2015]: time="2025-11-23T23:01:04.674583214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:01:04.940090 containerd[2015]: time="2025-11-23T23:01:04.939764156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:04.942082 containerd[2015]: time="2025-11-23T23:01:04.942002002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:01:04.942277 containerd[2015]: time="2025-11-23T23:01:04.942146878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:01:04.942360 kubelet[3557]: E1123 23:01:04.942318 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:01:04.942932 kubelet[3557]: E1123 23:01:04.942382 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:01:04.942932 kubelet[3557]: E1123 23:01:04.942672 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffwh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7qjg8_calico-system(58da0435-c510-4733-869f-85a4fe15eaf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:04.944656 kubelet[3557]: E1123 23:01:04.943924 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:01:04.944972 containerd[2015]: time="2025-11-23T23:01:04.943702424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:01:05.214533 containerd[2015]: time="2025-11-23T23:01:05.213828841Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Nov 23 23:01:05.216234 containerd[2015]: time="2025-11-23T23:01:05.216141461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:01:05.216471 containerd[2015]: time="2025-11-23T23:01:05.216154248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:01:05.216734 kubelet[3557]: E1123 23:01:05.216661 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:01:05.216844 kubelet[3557]: E1123 23:01:05.216764 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:01:05.217668 kubelet[3557]: E1123 23:01:05.216937 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd8ad3e85ed44ff798d8fe6459e599d3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:05.221619 containerd[2015]: time="2025-11-23T23:01:05.221527581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:01:05.504282 containerd[2015]: time="2025-11-23T23:01:05.504088565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:05.506412 containerd[2015]: time="2025-11-23T23:01:05.506262119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:01:05.506715 containerd[2015]: time="2025-11-23T23:01:05.506683013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:01:05.507917 kubelet[3557]: E1123 23:01:05.507151 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:01:05.507917 kubelet[3557]: E1123 23:01:05.507215 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:01:05.507917 kubelet[3557]: E1123 23:01:05.507378 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:05.509057 kubelet[3557]: E1123 23:01:05.508978 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:01:07.676481 containerd[2015]: time="2025-11-23T23:01:07.676306138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:01:07.946956 containerd[2015]: time="2025-11-23T23:01:07.946646947Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:07.949018 containerd[2015]: time="2025-11-23T23:01:07.948836121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:01:07.949018 containerd[2015]: time="2025-11-23T23:01:07.948961499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:01:07.949328 
kubelet[3557]: E1123 23:01:07.949228 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:07.949888 kubelet[3557]: E1123 23:01:07.949327 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:07.949888 kubelet[3557]: E1123 23:01:07.949523 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqcn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:07.951366 kubelet[3557]: E1123 23:01:07.950930 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:01:08.299851 systemd[1]: Started sshd@22-172.31.24.27:22-139.178.68.195:49678.service - OpenSSH per-connection server daemon (139.178.68.195:49678). Nov 23 23:01:08.507614 sshd[6071]: Accepted publickey for core from 139.178.68.195 port 49678 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:08.511509 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:08.521506 systemd-logind[1980]: New session 23 of user core. Nov 23 23:01:08.527036 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 23 23:01:08.767850 sshd[6074]: Connection closed by 139.178.68.195 port 49678 Nov 23 23:01:08.769019 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:08.777922 systemd[1]: sshd@22-172.31.24.27:22-139.178.68.195:49678.service: Deactivated successfully. Nov 23 23:01:08.782281 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 23:01:08.787823 systemd-logind[1980]: Session 23 logged out. Waiting for processes to exit. Nov 23 23:01:08.790025 systemd-logind[1980]: Removed session 23. 
Nov 23 23:01:09.675660 containerd[2015]: time="2025-11-23T23:01:09.675512339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:01:10.008380 containerd[2015]: time="2025-11-23T23:01:10.008307568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:10.010508 containerd[2015]: time="2025-11-23T23:01:10.010433939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:01:10.010627 containerd[2015]: time="2025-11-23T23:01:10.010550253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:01:10.010962 kubelet[3557]: E1123 23:01:10.010885 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:01:10.012998 kubelet[3557]: E1123 23:01:10.011235 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:01:10.012998 kubelet[3557]: E1123 23:01:10.011845 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:10.016392 containerd[2015]: time="2025-11-23T23:01:10.016329221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:01:10.278900 containerd[2015]: time="2025-11-23T23:01:10.278564587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:10.281060 containerd[2015]: time="2025-11-23T23:01:10.280907330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:01:10.281060 containerd[2015]: time="2025-11-23T23:01:10.280976832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:01:10.281351 kubelet[3557]: E1123 23:01:10.281280 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:01:10.281439 kubelet[3557]: E1123 23:01:10.281351 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:01:10.281613 kubelet[3557]: E1123 
23:01:10.281526 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:10.282999 kubelet[3557]: E1123 23:01:10.282930 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:01:12.678286 containerd[2015]: time="2025-11-23T23:01:12.678155439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:01:12.678899 kubelet[3557]: E1123 23:01:12.678702 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:01:12.981678 containerd[2015]: time="2025-11-23T23:01:12.981517047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:12.983691 containerd[2015]: time="2025-11-23T23:01:12.983624100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:01:12.983854 containerd[2015]: time="2025-11-23T23:01:12.983768472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:01:12.984143 kubelet[3557]: E1123 23:01:12.984093 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:12.984287 kubelet[3557]: E1123 23:01:12.984259 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:12.984659 kubelet[3557]: E1123 23:01:12.984580 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:12.986412 kubelet[3557]: E1123 23:01:12.986297 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:01:13.676580 containerd[2015]: time="2025-11-23T23:01:13.676452810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:01:13.813765 systemd[1]: Started sshd@23-172.31.24.27:22-139.178.68.195:46634.service - OpenSSH per-connection server daemon (139.178.68.195:46634). 
Nov 23 23:01:13.911267 containerd[2015]: time="2025-11-23T23:01:13.911181805Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:13.913440 containerd[2015]: time="2025-11-23T23:01:13.913366609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:01:13.913550 containerd[2015]: time="2025-11-23T23:01:13.913496514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:01:13.913913 kubelet[3557]: E1123 23:01:13.913845 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:13.914402 kubelet[3557]: E1123 23:01:13.913936 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:13.914402 kubelet[3557]: E1123 23:01:13.914321 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p89f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:13.916295 kubelet[3557]: E1123 23:01:13.916222 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:01:14.032135 sshd[6091]: Accepted publickey for core from 139.178.68.195 port 46634 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:14.034851 sshd-session[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:14.044165 systemd-logind[1980]: New session 24 of user core. Nov 23 23:01:14.054016 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 23 23:01:14.299831 sshd[6094]: Connection closed by 139.178.68.195 port 46634 Nov 23 23:01:14.300663 sshd-session[6091]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:14.308134 systemd[1]: sshd@23-172.31.24.27:22-139.178.68.195:46634.service: Deactivated successfully. Nov 23 23:01:14.313678 systemd[1]: session-24.scope: Deactivated successfully. Nov 23 23:01:14.317618 systemd-logind[1980]: Session 24 logged out. Waiting for processes to exit. Nov 23 23:01:14.320803 systemd-logind[1980]: Removed session 24. 
Nov 23 23:01:17.675703 kubelet[3557]: E1123 23:01:17.674627 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:01:19.344409 systemd[1]: Started sshd@24-172.31.24.27:22-139.178.68.195:46638.service - OpenSSH per-connection server daemon (139.178.68.195:46638). Nov 23 23:01:19.544617 sshd[6132]: Accepted publickey for core from 139.178.68.195 port 46638 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:19.547423 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:19.555712 systemd-logind[1980]: New session 25 of user core. Nov 23 23:01:19.565010 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 23 23:01:19.676749 kubelet[3557]: E1123 23:01:19.676198 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:01:19.836711 sshd[6135]: Connection closed by 139.178.68.195 port 46638 Nov 23 23:01:19.835601 sshd-session[6132]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:19.842568 systemd[1]: sshd@24-172.31.24.27:22-139.178.68.195:46638.service: Deactivated successfully. Nov 23 23:01:19.843815 systemd-logind[1980]: Session 25 logged out. Waiting for processes to exit. Nov 23 23:01:19.848936 systemd[1]: session-25.scope: Deactivated successfully. Nov 23 23:01:19.854552 systemd-logind[1980]: Removed session 25. 
Nov 23 23:01:21.676035 kubelet[3557]: E1123 23:01:21.675954 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:01:21.680865 kubelet[3557]: E1123 23:01:21.680122 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:01:24.879336 systemd[1]: Started sshd@25-172.31.24.27:22-139.178.68.195:60104.service - OpenSSH per-connection server daemon (139.178.68.195:60104). 
Nov 23 23:01:25.104055 sshd[6151]: Accepted publickey for core from 139.178.68.195 port 60104 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:25.108971 sshd-session[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:25.121515 systemd-logind[1980]: New session 26 of user core. Nov 23 23:01:25.130182 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 23 23:01:25.436974 sshd[6154]: Connection closed by 139.178.68.195 port 60104 Nov 23 23:01:25.438311 sshd-session[6151]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:25.448822 systemd-logind[1980]: Session 26 logged out. Waiting for processes to exit. Nov 23 23:01:25.449685 systemd[1]: sshd@25-172.31.24.27:22-139.178.68.195:60104.service: Deactivated successfully. Nov 23 23:01:25.455413 systemd[1]: session-26.scope: Deactivated successfully. Nov 23 23:01:25.461758 systemd-logind[1980]: Removed session 26. Nov 23 23:01:25.677033 kubelet[3557]: E1123 23:01:25.676967 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:01:26.675771 kubelet[3557]: E1123 23:01:26.673821 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:01:27.683060 kubelet[3557]: E1123 23:01:27.682969 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:01:29.678121 kubelet[3557]: E1123 23:01:29.677973 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 
23:01:30.481183 systemd[1]: Started sshd@26-172.31.24.27:22-139.178.68.195:52946.service - OpenSSH per-connection server daemon (139.178.68.195:52946). Nov 23 23:01:30.708163 sshd[6168]: Accepted publickey for core from 139.178.68.195 port 52946 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:30.710095 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:30.725020 systemd-logind[1980]: New session 27 of user core. Nov 23 23:01:30.732089 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 23 23:01:31.074852 sshd[6171]: Connection closed by 139.178.68.195 port 52946 Nov 23 23:01:31.075913 sshd-session[6168]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:31.087494 systemd-logind[1980]: Session 27 logged out. Waiting for processes to exit. Nov 23 23:01:31.090876 systemd[1]: sshd@26-172.31.24.27:22-139.178.68.195:52946.service: Deactivated successfully. Nov 23 23:01:31.096481 systemd[1]: session-27.scope: Deactivated successfully. Nov 23 23:01:31.101603 systemd-logind[1980]: Removed session 27. 
Nov 23 23:01:32.676043 kubelet[3557]: E1123 23:01:32.675817 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:01:33.683575 kubelet[3557]: E1123 23:01:33.683445 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:01:34.678325 kubelet[3557]: E1123 23:01:34.678154 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:01:36.116861 systemd[1]: Started sshd@27-172.31.24.27:22-139.178.68.195:52958.service - OpenSSH per-connection server daemon (139.178.68.195:52958). Nov 23 23:01:36.329311 sshd[6185]: Accepted publickey for core from 139.178.68.195 port 52958 ssh2: RSA SHA256:U+pqkkjujCqSWzNqlLC5FwY85x7/HjFaUhdBkqR7ZEA Nov 23 23:01:36.333579 sshd-session[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:01:36.349292 systemd-logind[1980]: New session 28 of user core. Nov 23 23:01:36.365330 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 23 23:01:36.667377 sshd[6188]: Connection closed by 139.178.68.195 port 52958 Nov 23 23:01:36.669906 sshd-session[6185]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:36.679361 kubelet[3557]: E1123 23:01:36.678982 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:01:36.683603 systemd[1]: sshd@27-172.31.24.27:22-139.178.68.195:52958.service: Deactivated successfully. Nov 23 23:01:36.684590 systemd-logind[1980]: Session 28 logged out. Waiting for processes to exit. 
Nov 23 23:01:36.693251 systemd[1]: session-28.scope: Deactivated successfully. Nov 23 23:01:36.703799 systemd-logind[1980]: Removed session 28. Nov 23 23:01:39.675375 kubelet[3557]: E1123 23:01:39.674694 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:01:42.675895 kubelet[3557]: E1123 23:01:42.675826 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:01:43.684040 kubelet[3557]: E1123 23:01:43.683707 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:01:44.675556 kubelet[3557]: E1123 23:01:44.675482 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:01:47.674339 containerd[2015]: time="2025-11-23T23:01:47.673930211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:01:47.925780 containerd[2015]: time="2025-11-23T23:01:47.924817249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:47.929954 containerd[2015]: time="2025-11-23T23:01:47.929810413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:01:47.930295 containerd[2015]: time="2025-11-23T23:01:47.929854573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes 
read=77" Nov 23 23:01:47.930505 kubelet[3557]: E1123 23:01:47.930428 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:01:47.931641 kubelet[3557]: E1123 23:01:47.930500 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:01:47.931641 kubelet[3557]: E1123 23:01:47.931395 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,Mo
untPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffwh2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7qjg8_calico-system(58da0435-c510-4733-869f-85a4fe15eaf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:47.933142 kubelet[3557]: E1123 23:01:47.932754 3557 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:01:48.677694 kubelet[3557]: E1123 23:01:48.677624 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:01:51.675755 containerd[2015]: time="2025-11-23T23:01:51.675011091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:01:51.919216 containerd[2015]: time="2025-11-23T23:01:51.919021948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:51.922052 containerd[2015]: time="2025-11-23T23:01:51.921969652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:01:51.922378 containerd[2015]: time="2025-11-23T23:01:51.922115152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:01:51.922849 kubelet[3557]: E1123 23:01:51.922627 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:01:51.922849 kubelet[3557]: E1123 23:01:51.922796 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:01:51.924394 kubelet[3557]: E1123 23:01:51.924132 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-876ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-856dd64f49-b8qsw_calico-system(5cc67e78-541e-4794-9086-b55b57263fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:51.926510 kubelet[3557]: E1123 23:01:51.925832 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:01:52.676407 kubelet[3557]: E1123 23:01:52.674190 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:01:55.676051 containerd[2015]: time="2025-11-23T23:01:55.675544087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:01:55.932683 containerd[2015]: time="2025-11-23T23:01:55.932394404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:55.934772 containerd[2015]: time="2025-11-23T23:01:55.934599236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:01:55.934772 containerd[2015]: time="2025-11-23T23:01:55.934659296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:01:55.935396 kubelet[3557]: E1123 23:01:55.935325 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:55.936608 kubelet[3557]: E1123 23:01:55.935972 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:55.936608 kubelet[3557]: E1123 23:01:55.936474 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkbqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-2c5lm_calico-apiserver(82cd0774-54f5-4b66-8a2d-bd758439764f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:55.937715 kubelet[3557]: E1123 23:01:55.937647 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:01:56.676777 containerd[2015]: time="2025-11-23T23:01:56.676352948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:01:56.907096 containerd[2015]: 
time="2025-11-23T23:01:56.907008405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:56.909587 containerd[2015]: time="2025-11-23T23:01:56.909427701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:01:56.910809 containerd[2015]: time="2025-11-23T23:01:56.909564477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:01:56.911291 kubelet[3557]: E1123 23:01:56.911193 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:01:56.911291 kubelet[3557]: E1123 23:01:56.911269 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:01:56.911646 kubelet[3557]: E1123 23:01:56.911557 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cd8ad3e85ed44ff798d8fe6459e599d3,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:56.912046 containerd[2015]: time="2025-11-23T23:01:56.911974245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:01:57.174630 
containerd[2015]: time="2025-11-23T23:01:57.174395622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:57.176869 containerd[2015]: time="2025-11-23T23:01:57.176689974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:01:57.176869 containerd[2015]: time="2025-11-23T23:01:57.176778318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:01:57.177755 kubelet[3557]: E1123 23:01:57.177242 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:57.177755 kubelet[3557]: E1123 23:01:57.177310 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:01:57.177755 kubelet[3557]: E1123 23:01:57.177646 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqcn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6574bf4f5d-qz2dt_calico-apiserver(1265050f-2f3c-4c9a-a19e-43d1823e072d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:57.180195 containerd[2015]: time="2025-11-23T23:01:57.179402646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:01:57.180582 kubelet[3557]: E1123 23:01:57.180029 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:01:57.468850 containerd[2015]: time="2025-11-23T23:01:57.467981780Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:01:57.470423 containerd[2015]: time="2025-11-23T23:01:57.470244932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:01:57.470423 containerd[2015]: time="2025-11-23T23:01:57.470377172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:01:57.472784 kubelet[3557]: E1123 23:01:57.471956 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:01:57.472784 kubelet[3557]: E1123 23:01:57.472026 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:01:57.472784 kubelet[3557]: E1123 23:01:57.472195 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4fdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585dccbd85-tdmw2_calico-system(3ebffe5d-c943-4ecd-a570-27b8df3681f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:01:57.473970 kubelet[3557]: E1123 23:01:57.473857 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:02:00.676130 kubelet[3557]: E1123 23:02:00.675968 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:02:00.677842 containerd[2015]: time="2025-11-23T23:02:00.677253612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:00.909978 containerd[2015]: time="2025-11-23T23:02:00.908820829Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:00.912022 containerd[2015]: time="2025-11-23T23:02:00.911948521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:00.912405 containerd[2015]: time="2025-11-23T23:02:00.912187981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:02:00.912899 kubelet[3557]: E1123 23:02:00.912763 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:00.912899 kubelet[3557]: E1123 23:02:00.912849 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:00.914197 kubelet[3557]: E1123 23:02:00.914083 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:00.918155 containerd[2015]: time="2025-11-23T23:02:00.917481073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:01.162677 containerd[2015]: time="2025-11-23T23:02:01.162557446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:01.166092 containerd[2015]: time="2025-11-23T23:02:01.165960142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:01.166266 containerd[2015]: time="2025-11-23T23:02:01.166022458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:01.166685 kubelet[3557]: E1123 23:02:01.166528 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:01.166848 kubelet[3557]: E1123 23:02:01.166718 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:01.166848 kubelet[3557]: E1123 23:02:01.167193 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false
,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-xflhj_calico-system(39270bf4-b6a6-4d62-8a14-a5e6fd018861): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:01.168599 kubelet[3557]: E1123 23:02:01.168500 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:02:03.690534 containerd[2015]: time="2025-11-23T23:02:03.690459315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:03.930551 containerd[2015]: time="2025-11-23T23:02:03.930481108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:03.932846 containerd[2015]: time="2025-11-23T23:02:03.932719540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:03.933086 containerd[2015]: time="2025-11-23T23:02:03.933040948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:03.933773 kubelet[3557]: E1123 23:02:03.933445 3557 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:03.933773 kubelet[3557]: E1123 23:02:03.933514 3557 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:03.934484 kubelet[3557]: E1123 23:02:03.933709 3557 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p89f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fcd9fb754-gfvhg_calico-apiserver(a3a85933-215d-434f-beb8-3b039c057228): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:03.935714 kubelet[3557]: E1123 23:02:03.935641 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:02:06.675863 kubelet[3557]: E1123 23:02:06.675521 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:02:07.684683 kubelet[3557]: E1123 23:02:07.684311 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:02:07.688455 kubelet[3557]: E1123 23:02:07.688189 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:02:08.675437 kubelet[3557]: E1123 23:02:08.674779 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:02:13.675911 kubelet[3557]: E1123 23:02:13.675516 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:02:15.674840 kubelet[3557]: E1123 23:02:15.674636 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:02:16.674041 kubelet[3557]: E1123 23:02:16.673946 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:02:17.674360 kubelet[3557]: E1123 23:02:17.674179 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2" Nov 23 23:02:18.673629 kubelet[3557]: E1123 23:02:18.673551 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f" Nov 23 23:02:21.674057 kubelet[3557]: E1123 23:02:21.673942 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed 
to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4" Nov 23 23:02:23.120477 systemd[1]: cri-containerd-74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a.scope: Deactivated successfully. Nov 23 23:02:23.122059 systemd[1]: cri-containerd-74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a.scope: Consumed 5.400s CPU time, 58.8M memory peak. Nov 23 23:02:23.130409 containerd[2015]: time="2025-11-23T23:02:23.130271827Z" level=info msg="received container exit event container_id:\"74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a\" id:\"74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a\" pid:3212 exit_status:1 exited_at:{seconds:1763938943 nanos:129303835}" Nov 23 23:02:23.178447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a-rootfs.mount: Deactivated successfully. 
Nov 23 23:02:23.648138 kubelet[3557]: I1123 23:02:23.648075 3557 scope.go:117] "RemoveContainer" containerID="74ca3af071d79060e6595786e9ad3404d4f4a95f54d3ca5bbab5430c12e8079a" Nov 23 23:02:23.653054 containerd[2015]: time="2025-11-23T23:02:23.652414858Z" level=info msg="CreateContainer within sandbox \"11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 23 23:02:23.671661 containerd[2015]: time="2025-11-23T23:02:23.671143594Z" level=info msg="Container cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:23.678005 kubelet[3557]: E1123 23:02:23.676076 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d" Nov 23 23:02:23.697286 containerd[2015]: time="2025-11-23T23:02:23.697211146Z" level=info msg="CreateContainer within sandbox \"11bdb6d0932d895fb80af6bf6a389903b9430f4ede0e4f1bbc00af7d740bdb06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1\"" Nov 23 23:02:23.698323 containerd[2015]: time="2025-11-23T23:02:23.698263966Z" level=info msg="StartContainer for \"cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1\"" Nov 23 23:02:23.700761 containerd[2015]: time="2025-11-23T23:02:23.700664386Z" level=info msg="connecting to shim 
cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1" address="unix:///run/containerd/s/178e946813f84e7ee277baa5d37f7312ba1ffc029fda7c3aae262cfaba30c1ca" protocol=ttrpc version=3 Nov 23 23:02:23.746105 systemd[1]: Started cri-containerd-cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1.scope - libcontainer container cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1. Nov 23 23:02:23.838388 containerd[2015]: time="2025-11-23T23:02:23.838318319Z" level=info msg="StartContainer for \"cd2b64ec81c08e9ddbb219cca5d6955e6fbdc91f1598194326abc58599723cf1\" returns successfully" Nov 23 23:02:24.099858 systemd[1]: cri-containerd-49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea.scope: Deactivated successfully. Nov 23 23:02:24.100374 systemd[1]: cri-containerd-49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea.scope: Consumed 29.487s CPU time, 110.2M memory peak. Nov 23 23:02:24.107595 containerd[2015]: time="2025-11-23T23:02:24.107411864Z" level=info msg="received container exit event container_id:\"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\" id:\"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\" pid:3882 exit_status:1 exited_at:{seconds:1763938944 nanos:106972592}" Nov 23 23:02:24.161224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea-rootfs.mount: Deactivated successfully. 
Nov 23 23:02:24.662207 kubelet[3557]: I1123 23:02:24.662160 3557 scope.go:117] "RemoveContainer" containerID="49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea" Nov 23 23:02:24.667044 containerd[2015]: time="2025-11-23T23:02:24.666995567Z" level=info msg="CreateContainer within sandbox \"509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 23 23:02:24.676206 kubelet[3557]: E1123 23:02:24.676156 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3" Nov 23 23:02:24.694760 containerd[2015]: time="2025-11-23T23:02:24.692829395Z" level=info msg="Container 1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:24.697504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1023117783.mount: Deactivated successfully. 
Nov 23 23:02:24.712972 containerd[2015]: time="2025-11-23T23:02:24.712693631Z" level=info msg="CreateContainer within sandbox \"509ef28ef7fc1fb9363ac34098119ea5e723e9456e30388b33ffb43c48c4170e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913\"" Nov 23 23:02:24.713769 containerd[2015]: time="2025-11-23T23:02:24.713483771Z" level=info msg="StartContainer for \"1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913\"" Nov 23 23:02:24.715384 containerd[2015]: time="2025-11-23T23:02:24.715339127Z" level=info msg="connecting to shim 1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913" address="unix:///run/containerd/s/e384f5052c5f0eb43377b02f517f7aca09efbeef29764606f1af63526aae2a77" protocol=ttrpc version=3 Nov 23 23:02:24.767077 systemd[1]: Started cri-containerd-1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913.scope - libcontainer container 1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913. 
Nov 23 23:02:24.845017 containerd[2015]: time="2025-11-23T23:02:24.844875768Z" level=info msg="StartContainer for \"1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913\" returns successfully" Nov 23 23:02:26.775522 kubelet[3557]: E1123 23:02:26.775044 3557 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-27?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 23:02:28.674540 kubelet[3557]: E1123 23:02:28.674377 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861" Nov 23 23:02:28.675180 kubelet[3557]: E1123 23:02:28.674863 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228" Nov 23 23:02:29.512909 systemd[1]: cri-containerd-a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f.scope: Deactivated successfully. Nov 23 23:02:29.513434 systemd[1]: cri-containerd-a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f.scope: Consumed 5.691s CPU time, 23M memory peak. Nov 23 23:02:29.518988 containerd[2015]: time="2025-11-23T23:02:29.518862567Z" level=info msg="received container exit event container_id:\"a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f\" id:\"a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f\" pid:3222 exit_status:1 exited_at:{seconds:1763938949 nanos:518327343}" Nov 23 23:02:29.564688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f-rootfs.mount: Deactivated successfully. 
Nov 23 23:02:29.686892 kubelet[3557]: I1123 23:02:29.686605 3557 scope.go:117] "RemoveContainer" containerID="a2b0a3f916ffb1280a109f27443d8e32bd26b28d54701c9d740c879cbeaa4e3f" Nov 23 23:02:29.691654 containerd[2015]: time="2025-11-23T23:02:29.691600756Z" level=info msg="CreateContainer within sandbox \"ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 23 23:02:29.709096 containerd[2015]: time="2025-11-23T23:02:29.709029904Z" level=info msg="Container ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:29.729407 containerd[2015]: time="2025-11-23T23:02:29.729333568Z" level=info msg="CreateContainer within sandbox \"ba53400d305bd2c0d66d7e9cf46f0e41a24ec90e67918a93fe58d8c8692ea9d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe\"" Nov 23 23:02:29.730190 containerd[2015]: time="2025-11-23T23:02:29.730128304Z" level=info msg="StartContainer for \"ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe\"" Nov 23 23:02:29.732154 containerd[2015]: time="2025-11-23T23:02:29.732085756Z" level=info msg="connecting to shim ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe" address="unix:///run/containerd/s/a94ae28a2049d6f5a060da8ea82da29dccff7e1457260fa105b5f8acd19a7cf1" protocol=ttrpc version=3 Nov 23 23:02:29.778055 systemd[1]: Started cri-containerd-ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe.scope - libcontainer container ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe. 
Nov 23 23:02:29.857683 containerd[2015]: time="2025-11-23T23:02:29.857628497Z" level=info msg="StartContainer for \"ab41ade141566ca3c716aedc37a4ff21fe429563c5e4c1ab40ee6619cd84c7fe\" returns successfully"
Nov 23 23:02:32.674497 kubelet[3557]: E1123 23:02:32.674427 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2"
Nov 23 23:02:32.675102 kubelet[3557]: E1123 23:02:32.674497 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f"
Nov 23 23:02:35.675077 kubelet[3557]: E1123 23:02:35.674959 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7qjg8" podUID="58da0435-c510-4733-869f-85a4fe15eaf3"
Nov 23 23:02:35.676281 kubelet[3557]: E1123 23:02:35.675875 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585dccbd85-tdmw2" podUID="3ebffe5d-c943-4ecd-a570-27b8df3681f4"
Nov 23 23:02:36.282172 systemd[1]: cri-containerd-1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913.scope: Deactivated successfully.
Nov 23 23:02:36.284823 containerd[2015]: time="2025-11-23T23:02:36.284520069Z" level=info msg="received container exit event container_id:\"1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913\" id:\"1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913\" pid:6359 exit_status:1 exited_at:{seconds:1763938956 nanos:283345893}"
Nov 23 23:02:36.329258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913-rootfs.mount: Deactivated successfully.
Nov 23 23:02:36.715583 kubelet[3557]: I1123 23:02:36.715441 3557 scope.go:117] "RemoveContainer" containerID="49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea"
Nov 23 23:02:36.716946 kubelet[3557]: I1123 23:02:36.716334 3557 scope.go:117] "RemoveContainer" containerID="1fc553561c23cc363565c7dfeac162a181f4125a05d30f933f982e291abc5913"
Nov 23 23:02:36.716946 kubelet[3557]: E1123 23:02:36.716598 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-tpwn2_tigera-operator(718a1a35-2ee9-4e4d-b7fb-2565f19bd904)\"" pod="tigera-operator/tigera-operator-7dcd859c48-tpwn2" podUID="718a1a35-2ee9-4e4d-b7fb-2565f19bd904"
Nov 23 23:02:36.720552 containerd[2015]: time="2025-11-23T23:02:36.720497879Z" level=info msg="RemoveContainer for \"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\""
Nov 23 23:02:36.732480 containerd[2015]: time="2025-11-23T23:02:36.732319343Z" level=info msg="RemoveContainer for \"49347edbf13f527c6517d3e988223f35a37b00889d122ed09ab757f37bfc0bea\" returns successfully"
Nov 23 23:02:36.775742 kubelet[3557]: E1123 23:02:36.775641 3557 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-27?timeout=10s\": context deadline exceeded"
Nov 23 23:02:38.673932 kubelet[3557]: E1123 23:02:38.673867 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6574bf4f5d-qz2dt" podUID="1265050f-2f3c-4c9a-a19e-43d1823e072d"
Nov 23 23:02:42.674381 kubelet[3557]: E1123 23:02:42.674298 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-xflhj" podUID="39270bf4-b6a6-4d62-8a14-a5e6fd018861"
Nov 23 23:02:43.674357 kubelet[3557]: E1123 23:02:43.674014 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-2c5lm" podUID="82cd0774-54f5-4b66-8a2d-bd758439764f"
Nov 23 23:02:43.675659 kubelet[3557]: E1123 23:02:43.675577 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-856dd64f49-b8qsw" podUID="5cc67e78-541e-4794-9086-b55b57263fd2"
Nov 23 23:02:43.677038 kubelet[3557]: E1123 23:02:43.675805 3557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fcd9fb754-gfvhg" podUID="a3a85933-215d-434f-beb8-3b039c057228"