Nov 23 23:01:46.169047 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Nov 23 23:01:46.169095 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025 Nov 23 23:01:46.169119 kernel: KASLR disabled due to lack of seed Nov 23 23:01:46.169136 kernel: efi: EFI v2.7 by EDK II Nov 23 23:01:46.169152 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Nov 23 23:01:46.169166 kernel: secureboot: Secure boot disabled Nov 23 23:01:46.169183 kernel: ACPI: Early table checksum verification disabled Nov 23 23:01:46.169198 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Nov 23 23:01:46.169213 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Nov 23 23:01:46.169229 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 23 23:01:46.169245 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Nov 23 23:01:46.169264 kernel: ACPI: FACS 0x0000000078630000 000040 Nov 23 23:01:46.169279 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 23 23:01:46.169295 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Nov 23 23:01:46.169313 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Nov 23 23:01:46.169330 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Nov 23 23:01:46.169350 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 23 23:01:46.169367 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Nov 23 23:01:46.169383 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Nov 23 23:01:46.169399 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Nov 23 23:01:46.169415 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Nov 23 23:01:46.169431 kernel: printk: legacy bootconsole [uart0] enabled Nov 23 23:01:46.169447 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 23:01:46.169463 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Nov 23 23:01:46.169480 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Nov 23 23:01:46.169496 kernel: Zone ranges: Nov 23 23:01:46.169511 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 23 23:01:46.169531 kernel: DMA32 empty Nov 23 23:01:46.169547 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Nov 23 23:01:46.169563 kernel: Device empty Nov 23 23:01:46.169600 kernel: Movable zone start for each node Nov 23 23:01:46.169619 kernel: Early memory node ranges Nov 23 23:01:46.169636 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Nov 23 23:01:46.169652 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Nov 23 23:01:46.169669 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Nov 23 23:01:46.169686 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Nov 23 23:01:46.169702 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Nov 23 23:01:46.169718 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Nov 23 23:01:46.169734 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Nov 23 23:01:46.169758 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Nov 23 23:01:46.169782 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Nov 23 23:01:46.169799 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Nov 23 23:01:46.169817 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Nov 23 23:01:46.169834 kernel: psci: probing for conduit method from ACPI. Nov 23 23:01:46.169857 kernel: psci: PSCIv1.0 detected in firmware. Nov 23 23:01:46.169874 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 23:01:46.169891 kernel: psci: Trusted OS migration not required Nov 23 23:01:46.169908 kernel: psci: SMC Calling Convention v1.1 Nov 23 23:01:46.169925 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Nov 23 23:01:46.169969 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 23:01:46.169998 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 23:01:46.170015 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 23 23:01:46.170032 kernel: Detected PIPT I-cache on CPU0 Nov 23 23:01:46.170049 kernel: CPU features: detected: GIC system register CPU interface Nov 23 23:01:46.170066 kernel: CPU features: detected: Spectre-v2 Nov 23 23:01:46.170089 kernel: CPU features: detected: Spectre-v3a Nov 23 23:01:46.170106 kernel: CPU features: detected: Spectre-BHB Nov 23 23:01:46.170123 kernel: CPU features: detected: ARM erratum 1742098 Nov 23 23:01:46.170139 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Nov 23 23:01:46.170155 kernel: alternatives: applying boot alternatives Nov 23 23:01:46.170174 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:01:46.170192 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 23:01:46.170209 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 23:01:46.170225 kernel: Fallback order for Node 0: 0 Nov 23 23:01:46.170242 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Nov 23 23:01:46.170259 kernel: Policy zone: Normal Nov 23 23:01:46.170280 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 23:01:46.170296 kernel: software IO TLB: area num 2. Nov 23 23:01:46.170315 kernel: software IO TLB: mapped [mem 0x0000000074551000-0x0000000078551000] (64MB) Nov 23 23:01:46.170332 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 23 23:01:46.170348 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 23:01:46.170366 kernel: rcu: RCU event tracing is enabled. Nov 23 23:01:46.170383 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 23 23:01:46.170399 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 23:01:46.170417 kernel: Tracing variant of Tasks RCU enabled. Nov 23 23:01:46.170434 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 23:01:46.170451 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 23 23:01:46.170472 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 23 23:01:46.170489 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 23 23:01:46.170506 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 23:01:46.170522 kernel: GICv3: 96 SPIs implemented Nov 23 23:01:46.170539 kernel: GICv3: 0 Extended SPIs implemented Nov 23 23:01:46.170555 kernel: Root IRQ handler: gic_handle_irq Nov 23 23:01:46.170572 kernel: GICv3: GICv3 features: 16 PPIs Nov 23 23:01:46.170588 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 23:01:46.170604 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Nov 23 23:01:46.170622 kernel: ITS [mem 0x10080000-0x1009ffff] Nov 23 23:01:46.170639 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Nov 23 23:01:46.170658 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Nov 23 23:01:46.170680 kernel: GICv3: using LPI property table @0x0000000400110000 Nov 23 23:01:46.170697 kernel: ITS: Using hypervisor restricted LPI range [128] Nov 23 23:01:46.170715 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Nov 23 23:01:46.170731 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 23:01:46.170748 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Nov 23 23:01:46.170765 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Nov 23 23:01:46.170781 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Nov 23 23:01:46.170799 kernel: Console: colour dummy device 80x25 Nov 23 23:01:46.170816 kernel: printk: legacy console [tty1] enabled Nov 23 23:01:46.170833 kernel: ACPI: Core revision 20240827 Nov 23 23:01:46.170850 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Nov 23 23:01:46.170871 kernel: pid_max: default: 32768 minimum: 301 Nov 23 23:01:46.170888 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 23:01:46.170905 kernel: landlock: Up and running. Nov 23 23:01:46.170921 kernel: SELinux: Initializing. Nov 23 23:01:46.170939 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:01:46.171021 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:01:46.171048 kernel: rcu: Hierarchical SRCU implementation. Nov 23 23:01:46.171066 kernel: rcu: Max phase no-delay instances is 400. Nov 23 23:01:46.171092 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 23:01:46.171109 kernel: Remapping and enabling EFI services. Nov 23 23:01:46.171126 kernel: smp: Bringing up secondary CPUs ... Nov 23 23:01:46.171143 kernel: Detected PIPT I-cache on CPU1 Nov 23 23:01:46.171161 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Nov 23 23:01:46.171178 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Nov 23 23:01:46.171195 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Nov 23 23:01:46.171212 kernel: smp: Brought up 1 node, 2 CPUs Nov 23 23:01:46.171229 kernel: SMP: Total of 2 processors activated. 
Nov 23 23:01:46.171252 kernel: CPU: All CPU(s) started at EL1 Nov 23 23:01:46.171281 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 23:01:46.171299 kernel: CPU features: detected: 32-bit EL1 Support Nov 23 23:01:46.171322 kernel: CPU features: detected: CRC32 instructions Nov 23 23:01:46.171341 kernel: alternatives: applying system-wide alternatives Nov 23 23:01:46.171360 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved) Nov 23 23:01:46.171380 kernel: devtmpfs: initialized Nov 23 23:01:46.171399 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 23:01:46.171421 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 23 23:01:46.171440 kernel: 16880 pages in range for non-PLT usage Nov 23 23:01:46.171457 kernel: 508400 pages in range for PLT usage Nov 23 23:01:46.171476 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 23:01:46.171494 kernel: SMBIOS 3.0.0 present. Nov 23 23:01:46.171512 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Nov 23 23:01:46.171530 kernel: DMI: Memory slots populated: 0/0 Nov 23 23:01:46.171548 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 23:01:46.171566 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 23:01:46.171589 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 23:01:46.171607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 23:01:46.171625 kernel: audit: initializing netlink subsys (disabled) Nov 23 23:01:46.171642 kernel: audit: type=2000 audit(0.228:1): state=initialized audit_enabled=0 res=1 Nov 23 23:01:46.171660 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 23:01:46.171678 kernel: cpuidle: using governor menu Nov 23 23:01:46.171696 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 23 23:01:46.171714 kernel: ASID allocator initialised with 65536 entries Nov 23 23:01:46.171731 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 23:01:46.171753 kernel: Serial: AMBA PL011 UART driver Nov 23 23:01:46.171771 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 23:01:46.171789 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 23:01:46.171808 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 23:01:46.171826 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 23:01:46.171845 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 23:01:46.171863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 23:01:46.171882 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 23:01:46.171900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 23:01:46.171925 kernel: ACPI: Added _OSI(Module Device) Nov 23 23:01:46.173571 kernel: ACPI: Added _OSI(Processor Device) Nov 23 23:01:46.173644 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 23:01:46.173664 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 23:01:46.173683 kernel: ACPI: Interpreter enabled Nov 23 23:01:46.173700 kernel: ACPI: Using GIC for interrupt routing Nov 23 23:01:46.173718 kernel: ACPI: MCFG table detected, 1 entries Nov 23 23:01:46.173736 kernel: ACPI: CPU0 has been hot-added Nov 23 23:01:46.173755 kernel: ACPI: CPU1 has been hot-added Nov 23 23:01:46.173784 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Nov 23 23:01:46.174655 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 23:01:46.174868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 23:01:46.176230 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 23:01:46.176451 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Nov 23 23:01:46.176642 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Nov 23 23:01:46.176668 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Nov 23 23:01:46.176699 kernel: acpiphp: Slot [1] registered Nov 23 23:01:46.176718 kernel: acpiphp: Slot [2] registered Nov 23 23:01:46.176735 kernel: acpiphp: Slot [3] registered Nov 23 23:01:46.176753 kernel: acpiphp: Slot [4] registered Nov 23 23:01:46.176771 kernel: acpiphp: Slot [5] registered Nov 23 23:01:46.176788 kernel: acpiphp: Slot [6] registered Nov 23 23:01:46.176806 kernel: acpiphp: Slot [7] registered Nov 23 23:01:46.176823 kernel: acpiphp: Slot [8] registered Nov 23 23:01:46.176840 kernel: acpiphp: Slot [9] registered Nov 23 23:01:46.176858 kernel: acpiphp: Slot [10] registered Nov 23 23:01:46.176880 kernel: acpiphp: Slot [11] registered Nov 23 23:01:46.176898 kernel: acpiphp: Slot [12] registered Nov 23 23:01:46.176916 kernel: acpiphp: Slot [13] registered Nov 23 23:01:46.176935 kernel: acpiphp: Slot [14] registered Nov 23 23:01:46.178047 kernel: acpiphp: Slot [15] registered Nov 23 23:01:46.178079 kernel: acpiphp: Slot [16] registered Nov 23 23:01:46.178098 kernel: acpiphp: Slot [17] registered Nov 23 23:01:46.178116 kernel: acpiphp: Slot [18] registered Nov 23 23:01:46.178135 kernel: acpiphp: Slot [19] registered Nov 23 23:01:46.178162 kernel: acpiphp: Slot [20] registered Nov 23 23:01:46.178180 kernel: acpiphp: Slot [21] registered Nov 23 23:01:46.178197 
kernel: acpiphp: Slot [22] registered Nov 23 23:01:46.178215 kernel: acpiphp: Slot [23] registered Nov 23 23:01:46.178232 kernel: acpiphp: Slot [24] registered Nov 23 23:01:46.178250 kernel: acpiphp: Slot [25] registered Nov 23 23:01:46.178267 kernel: acpiphp: Slot [26] registered Nov 23 23:01:46.178285 kernel: acpiphp: Slot [27] registered Nov 23 23:01:46.178303 kernel: acpiphp: Slot [28] registered Nov 23 23:01:46.178321 kernel: acpiphp: Slot [29] registered Nov 23 23:01:46.178343 kernel: acpiphp: Slot [30] registered Nov 23 23:01:46.178360 kernel: acpiphp: Slot [31] registered Nov 23 23:01:46.178378 kernel: PCI host bridge to bus 0000:00 Nov 23 23:01:46.178633 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Nov 23 23:01:46.178809 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 23:01:46.179036 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Nov 23 23:01:46.179236 kernel: pci_bus 0000:00: root bus resource [bus 00] Nov 23 23:01:46.179489 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Nov 23 23:01:46.179857 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Nov 23 23:01:46.182254 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Nov 23 23:01:46.182515 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Nov 23 23:01:46.182717 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Nov 23 23:01:46.182915 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 23 23:01:46.185277 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Nov 23 23:01:46.185506 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Nov 23 23:01:46.185731 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Nov 23 23:01:46.185932 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Nov 23 23:01:46.186199 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 23 23:01:46.186392 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Nov 23 23:01:46.186566 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 23 23:01:46.186754 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Nov 23 23:01:46.186786 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 23:01:46.186805 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 23:01:46.186824 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 23:01:46.186842 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 23:01:46.186860 kernel: iommu: Default domain type: Translated Nov 23 23:01:46.186878 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 23:01:46.186896 kernel: efivars: Registered efivars operations Nov 23 23:01:46.186914 kernel: vgaarb: loaded Nov 23 23:01:46.186939 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 23:01:46.189053 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 23:01:46.189075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 23:01:46.189095 kernel: pnp: PnP ACPI init Nov 23 23:01:46.189382 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Nov 23 23:01:46.189413 kernel: pnp: PnP ACPI: found 1 devices Nov 23 23:01:46.189432 kernel: NET: Registered PF_INET protocol family Nov 23 23:01:46.189452 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 23:01:46.189486 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 23:01:46.189507 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 23:01:46.189659 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 23:01:46.189682 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 23:01:46.189703 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 23:01:46.189722 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:01:46.189740 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:01:46.189758 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 23:01:46.189776 kernel: PCI: CLS 0 bytes, default 64 Nov 23 23:01:46.189802 kernel: kvm [1]: HYP mode not available Nov 23 23:01:46.189820 kernel: Initialise system trusted keyrings Nov 23 23:01:46.189839 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 23:01:46.189857 kernel: Key type asymmetric registered Nov 23 23:01:46.189875 kernel: Asymmetric key parser 'x509' registered Nov 23 23:01:46.189893 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 23:01:46.189911 kernel: io scheduler mq-deadline registered Nov 23 23:01:46.189929 kernel: io scheduler kyber registered Nov 23 23:01:46.189981 kernel: io scheduler bfq registered Nov 23 23:01:46.190289 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Nov 23 23:01:46.190323 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 23:01:46.190342 kernel: ACPI: button: Power Button [PWRB] Nov 23 23:01:46.190360 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Nov 23 23:01:46.190378 kernel: ACPI: button: Sleep Button [SLPB] Nov 23 23:01:46.190396 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 23:01:46.190415 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 23:01:46.190653 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Nov 23 23:01:46.190695 kernel: printk: legacy console [ttyS0] disabled Nov 23 23:01:46.190715 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Nov 23 23:01:46.190734 kernel: printk: legacy console [ttyS0] enabled Nov 23 23:01:46.190752 kernel: printk: legacy bootconsole [uart0] disabled Nov 23 23:01:46.190770 kernel: thunder_xcv, ver 1.0 Nov 23 23:01:46.190789 kernel: thunder_bgx, ver 1.0 Nov 23 23:01:46.190807 kernel: nicpf, ver 1.0 Nov 23 23:01:46.190825 kernel: nicvf, ver 1.0 Nov 23 23:01:46.193208 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 23:01:46.193449 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:01:45 UTC (1763938905) Nov 23 23:01:46.193477 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 23:01:46.193497 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Nov 23 23:01:46.193516 kernel: NET: Registered PF_INET6 protocol family Nov 23 23:01:46.193534 kernel: watchdog: NMI not fully supported Nov 23 23:01:46.193553 kernel: watchdog: Hard watchdog permanently disabled Nov 23 23:01:46.193570 kernel: Segment Routing with IPv6 Nov 23 23:01:46.193614 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 23:01:46.193633 kernel: NET: Registered PF_PACKET protocol family Nov 23 23:01:46.193660 kernel: Key type 
dns_resolver registered Nov 23 23:01:46.193678 kernel: registered taskstats version 1 Nov 23 23:01:46.193696 kernel: Loading compiled-in X.509 certificates Nov 23 23:01:46.193716 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339' Nov 23 23:01:46.193734 kernel: Demotion targets for Node 0: null Nov 23 23:01:46.193752 kernel: Key type .fscrypt registered Nov 23 23:01:46.193769 kernel: Key type fscrypt-provisioning registered Nov 23 23:01:46.193786 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 23:01:46.193804 kernel: ima: Allocated hash algorithm: sha1 Nov 23 23:01:46.193827 kernel: ima: No architecture policies found Nov 23 23:01:46.193844 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 23:01:46.193862 kernel: clk: Disabling unused clocks Nov 23 23:01:46.193879 kernel: PM: genpd: Disabling unused power domains Nov 23 23:01:46.193897 kernel: Warning: unable to open an initial console. Nov 23 23:01:46.193915 kernel: Freeing unused kernel memory: 39552K Nov 23 23:01:46.193934 kernel: Run /init as init process Nov 23 23:01:46.196115 kernel: with arguments: Nov 23 23:01:46.196149 kernel: /init Nov 23 23:01:46.196181 kernel: with environment: Nov 23 23:01:46.196200 kernel: HOME=/ Nov 23 23:01:46.196219 kernel: TERM=linux Nov 23 23:01:46.196240 systemd[1]: Successfully made /usr/ read-only. Nov 23 23:01:46.196266 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:01:46.196287 systemd[1]: Detected virtualization amazon. Nov 23 23:01:46.196306 systemd[1]: Detected architecture arm64. Nov 23 23:01:46.196330 systemd[1]: Running in initrd. Nov 23 23:01:46.196349 systemd[1]: No hostname configured, using default hostname. Nov 23 23:01:46.196370 systemd[1]: Hostname set to . Nov 23 23:01:46.196389 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:01:46.196408 systemd[1]: Queued start job for default target initrd.target. Nov 23 23:01:46.196428 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:01:46.196449 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:01:46.196470 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 23:01:46.196496 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:01:46.196517 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 23:01:46.196540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 23:01:46.196562 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 23:01:46.196583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 23:01:46.196602 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:01:46.196623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Nov 23 23:01:46.196647 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:01:46.196667 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:01:46.196687 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:01:46.196706 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:01:46.196725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:01:46.196745 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:01:46.196765 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 23:01:46.196785 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 23:01:46.196804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:01:46.196830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:01:46.196849 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:01:46.196869 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:01:46.196888 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 23:01:46.196908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:01:46.196927 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 23:01:46.197015 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 23:01:46.197044 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 23:01:46.197072 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:01:46.197092 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:01:46.197111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:01:46.197131 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 23:01:46.197152 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:01:46.197176 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 23:01:46.197196 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:01:46.197275 systemd-journald[258]: Collecting audit messages is disabled. Nov 23 23:01:46.197318 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 23:01:46.197343 kernel: Bridge firewalling registered Nov 23 23:01:46.197364 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:01:46.197384 systemd-journald[258]: Journal started Nov 23 23:01:46.199988 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2ddc8a4d258b752b944b4ec21ea333) is 8M, max 75.3M, 67.3M free. Nov 23 23:01:46.150580 systemd-modules-load[259]: Inserted module 'overlay' Nov 23 23:01:46.203812 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:01:46.189631 systemd-modules-load[259]: Inserted module 'br_netfilter' Nov 23 23:01:46.206592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:01:46.211014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:01:46.223199 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 23 23:01:46.235174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:01:46.240172 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:01:46.253251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:01:46.293367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:01:46.299161 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:01:46.304059 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 23:01:46.311061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:01:46.319282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:01:46.329322 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 23:01:46.341430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:01:46.391838 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:01:46.441004 systemd-resolved[299]: Positive Trust Anchors: Nov 23 23:01:46.443362 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:01:46.446884 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:01:46.570991 kernel: SCSI subsystem initialized Nov 23 23:01:46.578993 kernel: Loading iSCSI transport class v2.0-870. Nov 23 23:01:46.593084 kernel: iscsi: registered transport (tcp) Nov 23 23:01:46.615754 kernel: iscsi: registered transport (qla4xxx) Nov 23 23:01:46.615836 kernel: QLogic iSCSI HBA Driver Nov 23 23:01:46.651133 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:01:46.683876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:01:46.696800 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:01:46.707998 kernel: random: crng init done Nov 23 23:01:46.708239 systemd-resolved[299]: Defaulting to hostname 'linux'. Nov 23 23:01:46.711714 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:01:46.717097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:01:46.791641 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 23 23:01:46.798317 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 23 23:01:46.904010 kernel: raid6: neonx8 gen() 6518 MB/s Nov 23 23:01:46.922013 kernel: raid6: neonx4 gen() 6543 MB/s Nov 23 23:01:46.938995 kernel: raid6: neonx2 gen() 5433 MB/s Nov 23 23:01:46.956002 kernel: raid6: neonx1 gen() 3947 MB/s Nov 23 23:01:46.974001 kernel: raid6: int64x8 gen() 3640 MB/s Nov 23 23:01:46.992015 kernel: raid6: int64x4 gen() 3708 MB/s Nov 23 23:01:47.009019 kernel: raid6: int64x2 gen() 3572 MB/s Nov 23 23:01:47.027138 kernel: raid6: int64x1 gen() 2730 MB/s Nov 23 23:01:47.027211 kernel: raid6: using algorithm neonx4 gen() 6543 MB/s Nov 23 23:01:47.046169 kernel: raid6: .... xor() 4592 MB/s, rmw enabled Nov 23 23:01:47.046251 kernel: raid6: using neon recovery algorithm Nov 23 23:01:47.055731 kernel: xor: measuring software checksum speed Nov 23 23:01:47.055828 kernel: 8regs : 12889 MB/sec Nov 23 23:01:47.058334 kernel: 32regs : 11786 MB/sec Nov 23 23:01:47.058412 kernel: arm64_neon : 9184 MB/sec Nov 23 23:01:47.058439 kernel: xor: using function: 8regs (12889 MB/sec) Nov 23 23:01:47.153998 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 23:01:47.167037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:01:47.174237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:01:47.226830 systemd-udevd[507]: Using default interface naming scheme 'v255'. Nov 23 23:01:47.240085 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:01:47.245997 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 23:01:47.292691 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Nov 23 23:01:47.345719 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:01:47.354394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:01:47.490656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:01:47.507692 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 23:01:47.665064 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 23:01:47.665134 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 23 23:01:47.672997 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 23:01:47.675014 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 23 23:01:47.681725 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 23 23:01:47.682149 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 23 23:01:47.687285 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 23 23:01:47.697628 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 23:01:47.697718 kernel: GPT:9289727 != 33554431 Nov 23 23:01:47.699163 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 23:01:47.700408 kernel: GPT:9289727 != 33554431 Nov 23 23:01:47.701679 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 23:01:47.702793 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 23:01:47.720407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:01:47.723140 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:a5:f2:e4:fb:d9 Nov 23 23:01:47.720710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 23 23:01:47.731526 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:01:47.737921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:01:47.743388 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:01:47.755541 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:01:47.790999 kernel: nvme nvme0: using unchecked data buffer Nov 23 23:01:47.797173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:01:47.933464 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 23 23:01:47.988206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 23 23:01:48.011588 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 23:01:48.043047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 23:01:48.068918 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 23 23:01:48.075054 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 23 23:01:48.078613 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:01:48.082150 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:01:48.091995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:01:48.100735 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 23:01:48.108073 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 23:01:48.141828 disk-uuid[685]: Primary Header is updated. Nov 23 23:01:48.141828 disk-uuid[685]: Secondary Entries is updated. Nov 23 23:01:48.141828 disk-uuid[685]: Secondary Header is updated. Nov 23 23:01:48.155368 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 23:01:48.166383 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:01:48.169140 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 23:01:49.174082 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 23 23:01:49.175532 disk-uuid[686]: The operation has completed successfully. Nov 23 23:01:49.371291 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 23:01:49.373869 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 23:01:49.454699 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 23:01:49.480707 sh[954]: Success Nov 23 23:01:49.511912 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 23:01:49.512014 kernel: device-mapper: uevent: version 1.0.3 Nov 23 23:01:49.512044 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 23:01:49.526991 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 23:01:49.627536 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 23:01:49.635753 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 23:01:49.659759 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 23 23:01:49.680000 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (977) Nov 23 23:01:49.684856 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 Nov 23 23:01:49.684941 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:01:49.828460 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 23:01:49.828544 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 23:01:49.828573 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 23:01:49.853822 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 23:01:49.858425 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:01:49.864472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 23:01:49.867779 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 23:01:49.884248 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 23:01:49.943020 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Nov 23 23:01:49.943105 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:01:49.946840 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:01:49.962862 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 23:01:49.962936 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 23:01:49.973024 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:01:49.978092 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 23:01:49.983531 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 23:01:50.085934 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:01:50.098399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:01:50.176903 systemd-networkd[1146]: lo: Link UP Nov 23 23:01:50.177355 systemd-networkd[1146]: lo: Gained carrier Nov 23 23:01:50.180755 systemd-networkd[1146]: Enumeration completed Nov 23 23:01:50.181686 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:01:50.181693 systemd-networkd[1146]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:01:50.182571 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:01:50.184767 systemd[1]: Reached target network.target - Network. Nov 23 23:01:50.199190 systemd-networkd[1146]: eth0: Link UP Nov 23 23:01:50.199198 systemd-networkd[1146]: eth0: Gained carrier Nov 23 23:01:50.199223 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 23 23:01:50.230101 systemd-networkd[1146]: eth0: DHCPv4 address 172.31.29.95/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 23:01:50.551082 ignition[1072]: Ignition 2.22.0 Nov 23 23:01:50.551604 ignition[1072]: Stage: fetch-offline Nov 23 23:01:50.552608 ignition[1072]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:50.552632 ignition[1072]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:50.553715 ignition[1072]: Ignition finished successfully Nov 23 23:01:50.563855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:01:50.569244 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 23 23:01:50.636667 ignition[1159]: Ignition 2.22.0 Nov 23 23:01:50.636699 ignition[1159]: Stage: fetch Nov 23 23:01:50.637281 ignition[1159]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:50.637308 ignition[1159]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:50.637468 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:50.663202 ignition[1159]: PUT result: OK Nov 23 23:01:50.672625 ignition[1159]: parsed url from cmdline: "" Nov 23 23:01:50.672644 ignition[1159]: no config URL provided Nov 23 23:01:50.672659 ignition[1159]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:01:50.672687 ignition[1159]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:01:50.672729 ignition[1159]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:50.675354 ignition[1159]: PUT result: OK Nov 23 23:01:50.678203 ignition[1159]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 23 23:01:50.681347 ignition[1159]: GET result: OK Nov 23 23:01:50.685879 ignition[1159]: parsing config with SHA512: 7df92b471a91a3dd37b10838b0284217e43d7e2d8abbe19e75dfc2830e761a1175860b12da3a503f9179cdd7e86b22f659e5e78c772f11aa8b3bf658fb5929d3 Nov 23 23:01:50.699540 unknown[1159]: fetched base config from "system" Nov 23 23:01:50.700309 unknown[1159]: fetched base config from "system" Nov 23 23:01:50.701776 ignition[1159]: fetch: fetch complete Nov 23 23:01:50.700335 unknown[1159]: fetched user config from "aws" Nov 23 23:01:50.701791 ignition[1159]: fetch: fetch passed Nov 23 23:01:50.701904 ignition[1159]: Ignition finished successfully Nov 23 23:01:50.713477 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 23:01:50.724357 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 23:01:50.783685 ignition[1166]: Ignition 2.22.0 Nov 23 23:01:50.784282 ignition[1166]: Stage: kargs Nov 23 23:01:50.784835 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:50.784859 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:50.785024 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:50.790199 ignition[1166]: PUT result: OK Nov 23 23:01:50.800928 ignition[1166]: kargs: kargs passed Nov 23 23:01:50.801115 ignition[1166]: Ignition finished successfully Nov 23 23:01:50.806495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 23:01:50.812534 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 23 23:01:50.861428 ignition[1172]: Ignition 2.22.0 Nov 23 23:01:50.861463 ignition[1172]: Stage: disks Nov 23 23:01:50.862104 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:50.862130 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:50.862289 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:50.866296 ignition[1172]: PUT result: OK Nov 23 23:01:50.883269 ignition[1172]: disks: disks passed Nov 23 23:01:50.883472 ignition[1172]: Ignition finished successfully Nov 23 23:01:50.888450 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 23:01:50.894341 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 23:01:50.899679 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 23:01:50.905521 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:01:50.908645 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:01:50.913355 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:01:50.919001 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 23:01:50.972727 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 23 23:01:50.978195 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 23:01:50.986817 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 23:01:51.127991 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none. Nov 23 23:01:51.129265 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 23:01:51.134095 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 23:01:51.140920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:01:51.149328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 23:01:51.154695 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 23 23:01:51.154805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 23:01:51.154863 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:01:51.188919 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 23:01:51.195570 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 23:01:51.209002 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200) Nov 23 23:01:51.213441 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:01:51.213522 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:01:51.220679 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 23:01:51.220752 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 23:01:51.224292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 23:01:51.603783 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 23:01:51.625351 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory Nov 23 23:01:51.645577 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 23:01:51.655041 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 23:01:52.020302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 23:01:52.027818 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 23:01:52.032791 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 23:01:52.066657 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 23:01:52.073623 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:01:52.100055 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 23:01:52.125920 ignition[1315]: INFO : Ignition 2.22.0 Nov 23 23:01:52.125920 ignition[1315]: INFO : Stage: mount Nov 23 23:01:52.132155 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:52.132155 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:52.137802 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:52.140853 ignition[1315]: INFO : PUT result: OK Nov 23 23:01:52.146282 ignition[1315]: INFO : mount: mount passed Nov 23 23:01:52.148134 ignition[1315]: INFO : Ignition finished successfully Nov 23 23:01:52.151284 systemd-networkd[1146]: eth0: Gained IPv6LL Nov 23 23:01:52.154895 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 23:01:52.161771 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 23:01:52.196366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:01:52.231072 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Nov 23 23:01:52.231162 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:01:52.233062 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:01:52.240759 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 23 23:01:52.240842 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 23 23:01:52.245279 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 23:01:52.318187 ignition[1343]: INFO : Ignition 2.22.0 Nov 23 23:01:52.318187 ignition[1343]: INFO : Stage: files Nov 23 23:01:52.323853 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:52.323853 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:52.323853 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:52.323853 ignition[1343]: INFO : PUT result: OK Nov 23 23:01:52.336172 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Nov 23 23:01:52.344561 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 23:01:52.344561 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 23:01:52.362335 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 23:01:52.365757 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 23:01:52.369224 unknown[1343]: wrote ssh authorized keys file for user: core Nov 23 23:01:52.371766 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 23:01:52.388266 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 23:01:52.388266 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 23 23:01:52.482578 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 23:01:52.617617 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:01:52.622028 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:01:52.652958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:01:52.652958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:01:52.652958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:01:52.652958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:01:52.652958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:01:52.652958 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 23 23:01:53.092467 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 23:01:53.507031 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:01:53.507031 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:01:53.518106 ignition[1343]: INFO : files: files passed Nov 23 23:01:53.518106 ignition[1343]: INFO : Ignition finished successfully Nov 23 23:01:53.552484 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 23:01:53.557777 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 23:01:53.565727 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 23:01:53.590768 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 23:01:53.593863 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 23:01:53.613707 initrd-setup-root-after-ignition[1373]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:01:53.617937 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:01:53.622109 initrd-setup-root-after-ignition[1373]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:01:53.625179 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:01:53.636734 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 23:01:53.644102 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Nov 23 23:01:53.718268 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 23:01:53.720276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 23:01:53.726488 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 23:01:53.729979 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 23:01:53.740529 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 23:01:53.747698 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 23:01:53.796115 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:01:53.806229 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 23:01:53.859710 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:01:53.865735 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:01:53.869608 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 23:01:53.874463 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 23:01:53.875190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:01:53.882933 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 23:01:53.893401 systemd[1]: Stopped target basic.target - Basic System. Nov 23 23:01:53.897930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 23:01:53.906459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:01:53.916912 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 23:01:53.922443 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:01:53.925613 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 23:01:53.928611 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:01:53.938003 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 23:01:53.945666 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 23:01:53.949500 systemd[1]: Stopped target swap.target - Swaps. Nov 23 23:01:53.954305 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 23:01:53.954773 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:01:53.963999 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:01:53.967512 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:01:53.976122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 23:01:53.978531 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:01:53.982095 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 23:01:53.982366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 23:01:53.990630 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 23:01:53.990928 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:01:53.995538 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 23:01:53.996089 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Nov 23 23:01:54.002931 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 23:01:54.018229 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 23:01:54.024913 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 23:01:54.025904 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:01:54.036731 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 23:01:54.037241 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:01:54.059890 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 23:01:54.066125 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 23:01:54.088933 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 23:01:54.098857 ignition[1397]: INFO : Ignition 2.22.0 Nov 23 23:01:54.098857 ignition[1397]: INFO : Stage: umount Nov 23 23:01:54.104607 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:01:54.104607 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 23 23:01:54.104607 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 23 23:01:54.104607 ignition[1397]: INFO : PUT result: OK Nov 23 23:01:54.109070 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 23:01:54.127132 ignition[1397]: INFO : umount: umount passed Nov 23 23:01:54.127132 ignition[1397]: INFO : Ignition finished successfully Nov 23 23:01:54.109351 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 23:01:54.115736 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 23:01:54.117030 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 23:01:54.119550 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 23:01:54.119706 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 23:01:54.125500 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 23:01:54.125619 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 23:01:54.131451 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 23 23:01:54.131554 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 23 23:01:54.134662 systemd[1]: Stopped target network.target - Network. Nov 23 23:01:54.141591 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 23:01:54.141730 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:01:54.149176 systemd[1]: Stopped target paths.target - Path Units. Nov 23 23:01:54.152400 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 23:01:54.156362 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:01:54.159919 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 23:01:54.163932 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 23:01:54.168037 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 23:01:54.168124 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:01:54.170453 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 23:01:54.170596 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:01:54.174800 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 23:01:54.174921 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Nov 23 23:01:54.179359 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 23:01:54.179452 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 23:01:54.186378 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 23:01:54.186499 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 23:01:54.189809 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 23:01:54.227520 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 23:01:54.239119 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 23:01:54.239580 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 23:01:54.250469 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 23:01:54.253808 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 23:01:54.254377 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 23:01:54.268197 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 23:01:54.270039 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 23:01:54.274823 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 23:01:54.274909 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:01:54.288937 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 23:01:54.293070 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 23:01:54.293188 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:01:54.297798 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 23:01:54.299787 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:01:54.305524 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 23:01:54.305634 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 23:01:54.308487 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 23:01:54.308595 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:01:54.316551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:01:54.340995 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 23:01:54.341172 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:01:54.368880 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 23:01:54.369214 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:01:54.379844 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 23:01:54.380242 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 23:01:54.391155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 23:01:54.391317 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 23:01:54.394776 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 23:01:54.394866 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:01:54.402312 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Nov 23 23:01:54.402435 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:01:54.407941 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 23:01:54.408110 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 23:01:54.421541 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 23:01:54.421677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:01:54.433920 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 23:01:54.436701 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 23:01:54.436831 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:01:54.442767 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 23:01:54.442871 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:01:54.454126 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 23 23:01:54.454231 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:01:54.457336 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 23:01:54.457425 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:01:54.475163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:01:54.475285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:01:54.484872 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 23:01:54.485038 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 23 23:01:54.485293 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 23:01:54.485397 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:01:54.507410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 23:01:54.508243 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 23:01:54.516626 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 23:01:54.521877 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 23:01:54.563821 systemd[1]: Switching root. Nov 23 23:01:54.616015 systemd-journald[258]: Received SIGTERM from PID 1 (systemd). 
Nov 23 23:01:54.616099 systemd-journald[258]: Journal stopped Nov 23 23:01:57.131935 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 23:01:57.132124 kernel: SELinux: policy capability open_perms=1 Nov 23 23:01:57.132165 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 23:01:57.132207 kernel: SELinux: policy capability always_check_network=0 Nov 23 23:01:57.132239 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 23:01:57.132270 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 23:01:57.132302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 23:01:57.132333 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 23:01:57.132363 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 23:01:57.132395 kernel: audit: type=1403 audit(1763938915.074:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 23:01:57.132442 systemd[1]: Successfully loaded SELinux policy in 116.474ms. Nov 23 23:01:57.132490 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.609ms. Nov 23 23:01:57.132528 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:01:57.132562 systemd[1]: Detected virtualization amazon. Nov 23 23:01:57.132595 systemd[1]: Detected architecture arm64. Nov 23 23:01:57.132625 systemd[1]: Detected first boot. Nov 23 23:01:57.132658 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:01:57.132694 zram_generator::config[1441]: No configuration found. Nov 23 23:01:57.132735 kernel: NET: Registered PF_VSOCK protocol family Nov 23 23:01:57.132767 systemd[1]: Populated /etc with preset unit settings. Nov 23 23:01:57.132803 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 23:01:57.132838 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 23:01:57.132870 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 23:01:57.132902 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 23:01:57.132937 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 23:01:57.133016 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 23:01:57.133053 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 23:01:57.133105 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 23:01:57.133139 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 23:01:57.133174 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 23:01:57.133207 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 23:01:57.133250 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 23:01:57.133286 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:01:57.133322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:01:57.133352 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
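The lines above record what PID 1 detects at startup after the root switch: the SELinux policy load, the virtualization backend ("amazon"), the arm64 architecture, and the first-boot state with the machine ID seeded from the VM UUID. As a minimal sketch, the same facts can be read back from a running system like this; the tool and paths are standard systemd/kernel interfaces, and using them this way is just an illustration.

    import platform
    import subprocess
    from pathlib import Path

    # Virtualization backend as systemd sees it ("amazon" on EC2 guests).
    virt = subprocess.run(["systemd-detect-virt"], capture_output=True, text=True).stdout.strip()

    # SELinux enforcing state: "1" enforcing, "0" permissive; the file only
    # exists when SELinux is enabled at all.
    enforce = Path("/sys/fs/selinux/enforce")
    mode = enforce.read_text().strip() if enforce.exists() else "disabled"

    # Machine ID that systemd initialized from the VM UUID on first boot.
    machine_id = Path("/etc/machine-id").read_text().strip()

    print(virt, platform.machine(), mode, machine_id)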
Nov 23 23:01:57.133389 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 23:01:57.133420 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 23:01:57.133453 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:01:57.133507 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 23 23:01:57.133548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:01:57.133580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:01:57.133610 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 23:01:57.133640 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 23:01:57.133678 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 23:01:57.133713 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 23:01:57.133754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:01:57.133784 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:01:57.133816 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:01:57.133850 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:01:57.133882 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 23:01:57.133912 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 23:01:57.133942 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 23:01:57.134042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:01:57.134086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:01:57.134116 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:01:57.134147 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 23:01:57.134177 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 23:01:57.134211 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 23:01:57.134247 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 23:01:57.134277 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 23:01:57.134312 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 23:01:57.134352 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 23:01:57.134385 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 23:01:57.134416 systemd[1]: Reached target machines.target - Containers. Nov 23 23:01:57.134445 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 23:01:57.134474 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:01:57.134504 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:01:57.134535 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 23:01:57.134564 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 23 23:01:57.134596 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:01:57.134632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:01:57.134662 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 23:01:57.134695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:01:57.134729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 23:01:57.134759 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 23:01:57.134787 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 23:01:57.134817 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 23:01:57.134846 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 23:01:57.134882 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:01:57.134912 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:01:57.134941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:01:57.135052 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:01:57.135088 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 23:01:57.135118 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 23:01:57.135148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:01:57.135188 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 23:01:57.135224 systemd[1]: Stopped verity-setup.service. Nov 23 23:01:57.135255 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 23:01:57.135294 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 23:01:57.135325 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 23:01:57.135356 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 23:01:57.135389 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 23:01:57.135419 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 23:01:57.135451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:01:57.135484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:01:57.135514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:01:57.135546 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:01:57.135582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:01:57.135613 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 23:01:57.135644 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 23:01:57.135675 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 23:01:57.135782 systemd-journald[1524]: Collecting audit messages is disabled. Nov 23 23:01:57.135853 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 23 23:01:57.135890 systemd-journald[1524]: Journal started Nov 23 23:01:57.135940 systemd-journald[1524]: Runtime Journal (/run/log/journal/ec2ddc8a4d258b752b944b4ec21ea333) is 8M, max 75.3M, 67.3M free. Nov 23 23:01:56.488528 systemd[1]: Queued start job for default target multi-user.target. Nov 23 23:01:56.512920 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 23 23:01:56.514084 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 23:01:57.175995 kernel: loop: module loaded Nov 23 23:01:57.176104 kernel: fuse: init (API version 7.41) Nov 23 23:01:57.187648 kernel: ACPI: bus type drm_connector registered Nov 23 23:01:57.187761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 23:01:57.187815 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:01:57.197782 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 23:01:57.238892 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 23:01:57.250051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:01:57.261588 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 23:01:57.261696 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:01:57.281884 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 23:01:57.297585 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 23:01:57.297688 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:01:57.314506 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:01:57.318101 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 23:01:57.322074 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:01:57.322562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:01:57.326091 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 23:01:57.327108 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 23:01:57.330690 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:01:57.334071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:01:57.339126 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:01:57.344067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:01:57.349252 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 23:01:57.355198 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 23:01:57.414589 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:01:57.424473 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 23:01:57.433513 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 23:01:57.436850 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
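journald caps the runtime journal under /run as reported above (8M used out of a 75.3M maximum). A quick way to check the equivalent figures on a live machine is journalctl's built-in accounting; the sketch below just wraps that CLI call and is an illustration rather than part of this boot flow.

    import subprocess

    # Ask journald how much space its active and archived journal files use.
    # `journalctl --disk-usage` prints a single human-readable summary line.
    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True, check=True)
    print(usage.stdout.strip())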
Nov 23 23:01:57.445216 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:01:57.450112 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 23:01:57.461507 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 23:01:57.479131 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 23:01:57.486599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 23:01:57.519446 kernel: loop0: detected capacity change from 0 to 100632 Nov 23 23:01:57.564803 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 23 23:01:57.564844 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 23 23:01:57.565628 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 23:01:57.567313 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 23:01:57.593229 systemd-journald[1524]: Time spent on flushing to /var/log/journal/ec2ddc8a4d258b752b944b4ec21ea333 is 89.584ms for 936 entries. Nov 23 23:01:57.593229 systemd-journald[1524]: System Journal (/var/log/journal/ec2ddc8a4d258b752b944b4ec21ea333) is 8M, max 195.6M, 187.6M free. Nov 23 23:01:57.718295 systemd-journald[1524]: Received client request to flush runtime journal. Nov 23 23:01:57.718384 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 23:01:57.718419 kernel: loop1: detected capacity change from 0 to 61264 Nov 23 23:01:57.599222 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:01:57.609339 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 23:01:57.619183 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:01:57.725717 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 23:01:57.745847 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:01:57.777085 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 23:01:57.785775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:01:57.855159 kernel: loop2: detected capacity change from 0 to 207008 Nov 23 23:01:57.863807 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Nov 23 23:01:57.863855 systemd-tmpfiles[1596]: ACLs are not supported, ignoring. Nov 23 23:01:57.878149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:01:57.917011 kernel: loop3: detected capacity change from 0 to 119840 Nov 23 23:01:58.048001 kernel: loop4: detected capacity change from 0 to 100632 Nov 23 23:01:58.069010 kernel: loop5: detected capacity change from 0 to 61264 Nov 23 23:01:58.093997 kernel: loop6: detected capacity change from 0 to 207008 Nov 23 23:01:58.132010 kernel: loop7: detected capacity change from 0 to 119840 Nov 23 23:01:58.147080 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Nov 23 23:01:58.148274 (sd-merge)[1602]: Merged extensions into '/usr'. Nov 23 23:01:58.157239 systemd[1]: Reload requested from client PID 1555 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 23:01:58.157282 systemd[1]: Reloading... Nov 23 23:01:58.289764 zram_generator::config[1624]: No configuration found. Nov 23 23:01:58.905602 systemd[1]: Reloading finished in 747 ms. 
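The sd-merge lines above show systemd-sysext overlaying the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') onto /usr, which is why a full reload follows. One way to inspect that state afterwards is shown below; it lists the symlinks Ignition staged under /etc/extensions and calls the real systemd-sysext CLI, purely as an example.

    import subprocess
    from pathlib import Path

    # Extension images staged earlier in this boot (the kubernetes sysext was
    # written to /opt and symlinked from /etc/extensions by Ignition).
    for entry in sorted(Path("/etc/extensions").glob("*.raw")):
        print(entry, "->", entry.resolve())

    # Ask systemd-sysext which hierarchies currently have extensions merged.
    print(subprocess.run(["systemd-sysext", "status"],
                         capture_output=True, text=True).stdout)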
Nov 23 23:01:58.929616 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 23:01:58.933452 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 23:01:58.951306 systemd[1]: Starting ensure-sysext.service... Nov 23 23:01:58.958540 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:01:58.967550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:01:59.012234 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)... Nov 23 23:01:59.012274 systemd[1]: Reloading... Nov 23 23:01:59.065195 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 23:01:59.065782 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 23:01:59.066500 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 23:01:59.067122 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 23:01:59.069126 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 23:01:59.069797 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Nov 23 23:01:59.070071 systemd-tmpfiles[1681]: ACLs are not supported, ignoring. Nov 23 23:01:59.083363 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:01:59.083398 systemd-tmpfiles[1681]: Skipping /boot Nov 23 23:01:59.122258 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:01:59.122296 systemd-tmpfiles[1681]: Skipping /boot Nov 23 23:01:59.157031 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Nov 23 23:01:59.164275 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:01:59.264020 zram_generator::config[1710]: No configuration found. Nov 23 23:01:59.615326 (udev-worker)[1733]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:01:59.897206 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 23 23:01:59.898780 systemd[1]: Reloading finished in 885 ms. Nov 23 23:01:59.918680 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:01:59.925111 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:01:59.954117 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:01:59.988187 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:01:59.995380 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 23:02:00.002151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 23:02:00.011385 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:02:00.022144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:02:00.030460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 23 23:02:00.053274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:02:00.057201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:02:00.067680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:02:00.087509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:02:00.091136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:02:00.091438 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:02:00.170025 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 23:02:00.179177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:02:00.179597 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:02:00.179832 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:02:00.192430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:02:00.200541 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:02:00.203377 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:02:00.203642 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:02:00.204047 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:02:00.229162 systemd[1]: Finished ensure-sysext.service. Nov 23 23:02:00.235118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:02:00.235579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:02:00.248456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:02:00.250511 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:02:00.255521 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:02:00.256201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:02:00.259876 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:02:00.267454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:02:00.269568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:02:00.327129 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 23:02:00.336469 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 23 23:02:00.339782 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:02:00.341716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:02:00.443516 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:02:00.447550 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:02:00.527246 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:02:00.535801 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:02:00.554282 augenrules[1905]: No rules Nov 23 23:02:00.556755 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:02:00.563799 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:02:00.812080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:02:00.866734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 23 23:02:00.870434 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 23:02:00.876497 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 23:02:00.951612 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 23:02:01.027449 systemd-networkd[1811]: lo: Link UP Nov 23 23:02:01.027479 systemd-networkd[1811]: lo: Gained carrier Nov 23 23:02:01.030721 systemd-networkd[1811]: Enumeration completed Nov 23 23:02:01.030972 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:02:01.037516 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:02:01.037543 systemd-networkd[1811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:02:01.038249 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 23:02:01.044710 systemd-networkd[1811]: eth0: Link UP Nov 23 23:02:01.045106 systemd-networkd[1811]: eth0: Gained carrier Nov 23 23:02:01.045149 systemd-networkd[1811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:02:01.048524 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 23:02:01.049035 systemd-resolved[1816]: Positive Trust Anchors: Nov 23 23:02:01.049577 systemd-resolved[1816]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:02:01.049795 systemd-resolved[1816]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:02:01.062057 systemd-networkd[1811]: eth0: DHCPv4 address 172.31.29.95/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 23 23:02:01.073386 systemd-resolved[1816]: Defaulting to hostname 'linux'. Nov 23 23:02:01.077268 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:02:01.080717 systemd[1]: Reached target network.target - Network. Nov 23 23:02:01.083188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:02:01.086625 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:02:01.089808 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:02:01.094258 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 23:02:01.097931 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:02:01.101188 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:02:01.104618 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 23:02:01.108185 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:02:01.108254 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:02:01.110907 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:02:01.115612 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:02:01.121895 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:02:01.131495 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:02:01.138508 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:02:01.142633 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 23:02:01.158457 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:02:01.162458 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:02:01.169040 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 23:02:01.173096 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:02:01.177092 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:02:01.179930 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:02:01.182736 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:02:01.182819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
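The DHCPv4 lease logged above (172.31.29.95/20 with gateway 172.31.16.1) can be sanity-checked with a few lines of standard-library Python. This is only a worked example of the prefix arithmetic, not something the boot flow runs.

    import ipaddress

    # The lease systemd-networkd logged for eth0.
    iface = ipaddress.ip_interface("172.31.29.95/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                 # 172.31.16.0/20
    print(iface.network.num_addresses)   # 4096 addresses in a /20
    print(gateway in iface.network)      # True: the gateway is on-link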
Nov 23 23:02:01.185149 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:02:01.190854 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 23:02:01.198100 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 23:02:01.208423 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 23:02:01.217373 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:02:01.224464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:02:01.228204 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:02:01.231647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 23:02:01.244430 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 23:02:01.255376 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 23:02:01.266306 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 23 23:02:01.278470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 23:02:01.296345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:02:01.333789 jq[1969]: false Nov 23 23:02:01.344293 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:02:01.351134 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:02:01.352211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:02:01.356473 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:02:01.368297 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:02:01.383055 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:02:01.387336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:02:01.389641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:02:01.446867 update_engine[1981]: I20251123 23:02:01.446301 1981 main.cc:92] Flatcar Update Engine starting Nov 23 23:02:01.482259 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:02:01.489526 extend-filesystems[1970]: Found /dev/nvme0n1p6 Nov 23 23:02:01.485779 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:02:01.487127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:02:01.492489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 23:02:01.492991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:02:01.527488 extend-filesystems[1970]: Found /dev/nvme0n1p9 Nov 23 23:02:01.557211 jq[1982]: true Nov 23 23:02:01.576502 tar[1988]: linux-arm64/LICENSE Nov 23 23:02:01.576502 tar[1988]: linux-arm64/helm Nov 23 23:02:01.578850 dbus-daemon[1967]: [system] SELinux support is enabled Nov 23 23:02:01.579242 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
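The tar output above (linux-arm64/LICENSE, linux-arm64/helm) comes from prepare-helm.service unpacking the helm-v3.17.0-linux-arm64.tar.gz that Ignition wrote to /opt earlier; the unit itself presumably just invokes tar. The Python sketch below reproduces the same effect with the standard library, purely as an illustration: the archive path and the member name are taken from the log, everything else (destination, permissions) is assumed.

    import tarfile
    from pathlib import Path

    archive = Path("/opt/helm-v3.17.0-linux-arm64.tar.gz")   # written by Ignition op(3)
    dest = Path("/opt/bin")
    dest.mkdir(parents=True, exist_ok=True)

    # Pull just the helm binary out of the release tarball and make it executable.
    with tarfile.open(archive, "r:gz") as tar:
        member = tar.getmember("linux-arm64/helm")
        member.name = "helm"                  # drop the linux-arm64/ prefix
        tar.extract(member, path=dest)
    (dest / "helm").chmod(0o755)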
Nov 23 23:02:01.600568 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9 Nov 23 23:02:01.591217 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:02:01.591291 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 23:02:01.592420 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:02:01.592495 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:02:01.619032 update_engine[1981]: I20251123 23:02:01.617278 1981 update_check_scheduler.cc:74] Next update check in 6m33s Nov 23 23:02:01.612131 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1811 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 23 23:02:01.623465 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 23 23:02:01.626605 systemd[1]: Started update-engine.service - Update Engine. Nov 23 23:02:01.655247 (ntainerd)[2006]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:02:01.659925 ntpd[1972]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:51 UTC 2025 (1): Starting Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:51 UTC 2025 (1): Starting Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: ---------------------------------------------------- Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: ntp-4 is maintained by Network Time Foundation, Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: corporation. Support and training for ntp-4 are Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: available at https://www.nwtime.org/support Nov 23 23:02:01.664499 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: ---------------------------------------------------- Nov 23 23:02:01.663276 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 23:02:01.663305 ntpd[1972]: ---------------------------------------------------- Nov 23 23:02:01.665722 coreos-metadata[1966]: Nov 23 23:02:01.665 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 23:02:01.663322 ntpd[1972]: ntp-4 is maintained by Network Time Foundation, Nov 23 23:02:01.663340 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 23:02:01.663358 ntpd[1972]: corporation. 
Support and training for ntp-4 are Nov 23 23:02:01.663376 ntpd[1972]: available at https://www.nwtime.org/support Nov 23 23:02:01.663392 ntpd[1972]: ---------------------------------------------------- Nov 23 23:02:01.670542 ntpd[1972]: proto: precision = 0.096 usec (-23) Nov 23 23:02:01.671151 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: proto: precision = 0.096 usec (-23) Nov 23 23:02:01.672255 ntpd[1972]: basedate set to 2025-11-11 Nov 23 23:02:01.674147 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: basedate set to 2025-11-11 Nov 23 23:02:01.674147 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: gps base set to 2025-11-16 (week 2393) Nov 23 23:02:01.674147 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 23:02:01.674147 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 23:02:01.672304 ntpd[1972]: gps base set to 2025-11-16 (week 2393) Nov 23 23:02:01.672520 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 23:02:01.672576 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 23:02:01.674929 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 23:02:01.675719 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 23:02:01.675719 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Listen normally on 3 eth0 172.31.29.95:123 Nov 23 23:02:01.675719 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: Listen normally on 4 lo [::1]:123 Nov 23 23:02:01.675719 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: bind(21) AF_INET6 [fe80::4a5:f2ff:fee4:fbd9%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 23:02:01.675719 ntpd[1972]: 23 Nov 23:02:01 ntpd[1972]: unable to create socket on eth0 (5) for [fe80::4a5:f2ff:fee4:fbd9%2]:123 Nov 23 23:02:01.675037 ntpd[1972]: Listen normally on 3 eth0 172.31.29.95:123 Nov 23 23:02:01.680380 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9 Nov 23 23:02:01.675096 ntpd[1972]: Listen normally on 4 lo [::1]:123 Nov 23 23:02:01.675150 ntpd[1972]: bind(21) AF_INET6 [fe80::4a5:f2ff:fee4:fbd9%2]:123 flags 0x811 failed: Cannot assign requested address Nov 23 23:02:01.675194 ntpd[1972]: unable to create socket on eth0 (5) for [fe80::4a5:f2ff:fee4:fbd9%2]:123 Nov 23 23:02:01.688734 systemd-coredump[2022]: Process 1972 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... 
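ntpd (PID 1972) segfaults right after its socket setup and systemd-coredump picks the crash up. When that happens, the core and its metadata remain queryable afterwards with coredumpctl; a minimal sketch of that follow-up is below (the commands are standard systemd tooling, and filtering by the program name is just one example of a match).

    import subprocess

    # List recorded crashes for ntpd, newest last.
    subprocess.run(["coredumpctl", "list", "ntpd"], check=False)

    # Show metadata (signal, timestamp, backtrace if debug info is available)
    # for the most recent ntpd crash.
    subprocess.run(["coredumpctl", "info", "ntpd"], check=False)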
Nov 23 23:02:01.696494 coreos-metadata[1966]: Nov 23 23:02:01.692 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 23 23:02:01.696629 jq[2014]: true Nov 23 23:02:01.698910 coreos-metadata[1966]: Nov 23 23:02:01.697 INFO Fetch successful Nov 23 23:02:01.698910 coreos-metadata[1966]: Nov 23 23:02:01.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 23 23:02:01.703988 extend-filesystems[2023]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 23:02:01.706671 coreos-metadata[1966]: Nov 23 23:02:01.702 INFO Fetch successful Nov 23 23:02:01.706671 coreos-metadata[1966]: Nov 23 23:02:01.702 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 23 23:02:01.710480 coreos-metadata[1966]: Nov 23 23:02:01.708 INFO Fetch successful Nov 23 23:02:01.710480 coreos-metadata[1966]: Nov 23 23:02:01.708 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 23 23:02:01.714744 coreos-metadata[1966]: Nov 23 23:02:01.714 INFO Fetch successful Nov 23 23:02:01.714744 coreos-metadata[1966]: Nov 23 23:02:01.714 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 23 23:02:01.714490 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.718 INFO Fetch failed with 404: resource not found Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.718 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.719 INFO Fetch successful Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.719 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.720 INFO Fetch successful Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.720 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.721 INFO Fetch successful Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.724 INFO Fetch successful Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.724 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 23 23:02:01.741397 coreos-metadata[1966]: Nov 23 23:02:01.725 INFO Fetch successful Nov 23 23:02:01.760381 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Nov 23 23:02:01.759310 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Nov 23 23:02:01.772778 systemd[1]: Started systemd-coredump@0-2022-0.service - Process Core Dump (PID 2022/UID 0). Nov 23 23:02:01.816372 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 23 23:02:01.874760 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 23:02:01.879720 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
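The coreos-metadata fetches above follow the IMDSv2 pattern: first a PUT to mint a session token, then GETs that present that token. A minimal standard-library sketch of the same exchange is shown below; the TTL value and the particular endpoints queried are illustrative choices, not taken from the agent's source.

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT a session token (IMDSv2); 21600 s is the common 6-hour maximum.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    # Step 2: GET metadata with the token attached, as coreos-metadata does above.
    for path in ("meta-data/instance-id", "meta-data/placement/availability-zone"):
        req = urllib.request.Request(
            f"{IMDS}/2021-01-03/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        print(path, "=", urllib.request.urlopen(req, timeout=2).read().decode())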
Nov 23 23:02:01.989169 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Nov 23 23:02:01.999019 extend-filesystems[2023]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 23 23:02:01.999019 extend-filesystems[2023]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 23 23:02:01.999019 extend-filesystems[2023]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Nov 23 23:02:02.014451 bash[2055]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:02:02.012079 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:02:02.025142 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9 Nov 23 23:02:02.030508 systemd[1]: Starting sshkeys.service... Nov 23 23:02:02.040214 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:02:02.041829 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:02:02.051604 systemd-logind[1979]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 23:02:02.051664 systemd-logind[1979]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 23 23:02:02.052094 systemd-logind[1979]: New seat seat0. Nov 23 23:02:02.053777 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 23:02:02.243256 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 23:02:02.250095 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 23:02:02.399816 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 23 23:02:02.416130 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 23 23:02:02.422750 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2018 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 23 23:02:02.439239 systemd[1]: Starting polkit.service - Authorization Manager... 
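The extend-filesystems step above grows the root ext4 filesystem online from 553472 to 3587067 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 13.7 GiB. A couple of lines of Python confirm the arithmetic.

    BLOCK = 4096                      # ext4 block size reported by the kernel
    old_blocks, new_blocks = 553_472, 3_587_067

    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"before: {to_gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {to_gib(new_blocks):.2f} GiB")   # ~13.68 GiB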
Nov 23 23:02:02.541598 containerd[2006]: time="2025-11-23T23:02:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:02:02.546155 containerd[2006]: time="2025-11-23T23:02:02.542901576Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:02:02.549738 locksmithd[2019]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:02:02.699988 containerd[2006]: time="2025-11-23T23:02:02.697895785Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.408µs" Nov 23 23:02:02.704685 containerd[2006]: time="2025-11-23T23:02:02.704596993Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:02:02.704817 containerd[2006]: time="2025-11-23T23:02:02.704691253Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:02:02.705280 containerd[2006]: time="2025-11-23T23:02:02.705197305Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:02:02.705280 containerd[2006]: time="2025-11-23T23:02:02.705273913Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:02:02.705413 containerd[2006]: time="2025-11-23T23:02:02.705341689Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:02:02.705592 containerd[2006]: time="2025-11-23T23:02:02.705531145Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:02:02.705659 containerd[2006]: time="2025-11-23T23:02:02.705581833Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:02:02.708000 containerd[2006]: time="2025-11-23T23:02:02.706099525Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:02:02.708000 containerd[2006]: time="2025-11-23T23:02:02.706167481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:02:02.708000 containerd[2006]: time="2025-11-23T23:02:02.706204753Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:02:02.708000 containerd[2006]: time="2025-11-23T23:02:02.706231849Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:02:02.708000 containerd[2006]: time="2025-11-23T23:02:02.706479277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:02:02.718751 containerd[2006]: time="2025-11-23T23:02:02.718662769Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:02:02.718907 containerd[2006]: time="2025-11-23T23:02:02.718817125Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:02:02.718907 containerd[2006]: time="2025-11-23T23:02:02.718852633Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:02:02.719037 containerd[2006]: time="2025-11-23T23:02:02.718927093Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:02:02.719533 containerd[2006]: time="2025-11-23T23:02:02.719455045Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:02:02.719813 containerd[2006]: time="2025-11-23T23:02:02.719683729Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.741774949Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.741935089Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742059361Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742098745Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742159741Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742195141Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742227733Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742260877Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742293349Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742325413Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742356529Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742390153Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742690285Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:02:02.745996 containerd[2006]: time="2025-11-23T23:02:02.742749313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.742788121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 
23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.742817749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.742845685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.742875601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.742932601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.744719161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.744774493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.744806773Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:02:02.746698 containerd[2006]: time="2025-11-23T23:02:02.744841081Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:02:02.749422 containerd[2006]: time="2025-11-23T23:02:02.749317201Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:02:02.749422 containerd[2006]: time="2025-11-23T23:02:02.749415301Z" level=info msg="Start snapshots syncer" Nov 23 23:02:02.749621 containerd[2006]: time="2025-11-23T23:02:02.749501389Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:02:02.753989 containerd[2006]: time="2025-11-23T23:02:02.752121169Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:02:02.753989 containerd[2006]: time="2025-11-23T23:02:02.752273041Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752393269Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752686621Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752746309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752779117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752812957Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752847109Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752892085Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:02:02.754336 containerd[2006]: time="2025-11-23T23:02:02.752922949Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764253049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:02:02.768992 containerd[2006]: 
time="2025-11-23T23:02:02.764342125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764379241Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764490337Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764642209Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764674441Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764711389Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764734645Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764763157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.764794909Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.765015709Z" level=info msg="runtime interface created" Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.765043201Z" level=info msg="created NRI interface" Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.765071353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.765109393Z" level=info msg="Connect containerd service" Nov 23 23:02:02.768992 containerd[2006]: time="2025-11-23T23:02:02.765192565Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:02:02.777719 containerd[2006]: time="2025-11-23T23:02:02.772926037Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:02:02.805742 coreos-metadata[2089]: Nov 23 23:02:02.805 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 23 23:02:02.807645 coreos-metadata[2089]: Nov 23 23:02:02.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 23 23:02:02.817669 coreos-metadata[2089]: Nov 23 23:02:02.816 INFO Fetch successful Nov 23 23:02:02.817669 coreos-metadata[2089]: Nov 23 23:02:02.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 23 23:02:02.819232 coreos-metadata[2089]: Nov 23 23:02:02.819 INFO Fetch successful Nov 23 23:02:02.829232 unknown[2089]: wrote ssh authorized keys file for user: core Nov 23 23:02:02.958113 update-ssh-keys[2159]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:02:02.954689 systemd[1]: 
Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 23:02:02.972721 systemd[1]: Finished sshkeys.service. Nov 23 23:02:03.021832 systemd-coredump[2031]: Process 1972 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1972: #0 0x0000aaaac5e50b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaac5dffe60 n/a (ntpd + 0xfe60) #2 0x0000aaaac5e00240 n/a (ntpd + 0x10240) #3 0x0000aaaac5dfbe14 n/a (ntpd + 0xbe14) #4 0x0000aaaac5dfd3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaac5e05a38 n/a (ntpd + 0x15a38) #6 0x0000aaaac5df738c n/a (ntpd + 0x738c) #7 0x0000ffff86142034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff86142118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaac5df73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Nov 23 23:02:03.031178 systemd-networkd[1811]: eth0: Gained IPv6LL Nov 23 23:02:03.033245 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Nov 23 23:02:03.033599 systemd[1]: ntpd.service: Failed with result 'core-dump'. Nov 23 23:02:03.064359 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:02:03.070864 systemd[1]: systemd-coredump@0-2022-0.service: Deactivated successfully. Nov 23 23:02:03.094797 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:02:03.106926 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 23 23:02:03.119597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:03.132835 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:02:03.140447 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Nov 23 23:02:03.148701 systemd[1]: Started ntpd.service - Network Time Service. Nov 23 23:02:03.400000 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:02:03.424644 ntpd[2187]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:51 UTC 2025 (1): Starting Nov 23 23:02:03.424796 ntpd[2187]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: ntpd 4.2.8p18@1.4062-o Sun Nov 23 20:17:51 UTC 2025 (1): Starting Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: ---------------------------------------------------- Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: ntp-4 is maintained by Network Time Foundation, Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: corporation. Support and training for ntp-4 are Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: available at https://www.nwtime.org/support Nov 23 23:02:03.425418 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: ---------------------------------------------------- Nov 23 23:02:03.424816 ntpd[2187]: ---------------------------------------------------- Nov 23 23:02:03.424835 ntpd[2187]: ntp-4 is maintained by Network Time Foundation, Nov 23 23:02:03.424853 ntpd[2187]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 23 23:02:03.424869 ntpd[2187]: corporation. 
Support and training for ntp-4 are Nov 23 23:02:03.424886 ntpd[2187]: available at https://www.nwtime.org/support Nov 23 23:02:03.424903 ntpd[2187]: ---------------------------------------------------- Nov 23 23:02:03.436089 containerd[2006]: time="2025-11-23T23:02:03.435668689Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:02:03.436089 containerd[2006]: time="2025-11-23T23:02:03.435915373Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:02:03.436540 containerd[2006]: time="2025-11-23T23:02:03.436023649Z" level=info msg="Start subscribing containerd event" Nov 23 23:02:03.436738 ntpd[2187]: proto: precision = 0.096 usec (-23) Nov 23 23:02:03.438536 containerd[2006]: time="2025-11-23T23:02:03.436500277Z" level=info msg="Start recovering state" Nov 23 23:02:03.438536 containerd[2006]: time="2025-11-23T23:02:03.438272473Z" level=info msg="Start event monitor" Nov 23 23:02:03.438671 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: proto: precision = 0.096 usec (-23) Nov 23 23:02:03.439436 containerd[2006]: time="2025-11-23T23:02:03.439192897Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:02:03.439436 containerd[2006]: time="2025-11-23T23:02:03.439268329Z" level=info msg="Start streaming server" Nov 23 23:02:03.439436 containerd[2006]: time="2025-11-23T23:02:03.439307077Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:02:03.439436 containerd[2006]: time="2025-11-23T23:02:03.439356301Z" level=info msg="runtime interface starting up..." Nov 23 23:02:03.439436 containerd[2006]: time="2025-11-23T23:02:03.439376857Z" level=info msg="starting plugins..." Nov 23 23:02:03.439874 containerd[2006]: time="2025-11-23T23:02:03.439806817Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:02:03.444642 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 23 23:02:03.447142 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: basedate set to 2025-11-11 Nov 23 23:02:03.447142 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: gps base set to 2025-11-16 (week 2393) Nov 23 23:02:03.447142 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 23:02:03.444697 ntpd[2187]: basedate set to 2025-11-11 Nov 23 23:02:03.444730 ntpd[2187]: gps base set to 2025-11-16 (week 2393) Nov 23 23:02:03.444894 ntpd[2187]: Listen and drop on 0 v6wildcard [::]:123 Nov 23 23:02:03.444984 ntpd[2187]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 23:02:03.450404 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 23 23:02:03.450404 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 23:02:03.450404 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listen normally on 3 eth0 172.31.29.95:123 Nov 23 23:02:03.450404 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listen normally on 4 lo [::1]:123 Nov 23 23:02:03.450404 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listen normally on 5 eth0 [fe80::4a5:f2ff:fee4:fbd9%2]:123 Nov 23 23:02:03.450404 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: Listening on routing socket on fd #22 for interface updates Nov 23 23:02:03.449888 ntpd[2187]: Listen normally on 2 lo 127.0.0.1:123 Nov 23 23:02:03.449976 ntpd[2187]: Listen normally on 3 eth0 172.31.29.95:123 Nov 23 23:02:03.450040 ntpd[2187]: Listen normally on 4 lo [::1]:123 Nov 23 23:02:03.450089 ntpd[2187]: Listen normally on 5 eth0 [fe80::4a5:f2ff:fee4:fbd9%2]:123 Nov 23 23:02:03.450133 ntpd[2187]: Listening on routing socket on fd #22 for interface updates Nov 23 23:02:03.460124 containerd[2006]: time="2025-11-23T23:02:03.460039117Z" level=info msg="containerd successfully booted in 0.920568s" Nov 23 23:02:03.494224 ntpd[2187]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 23:02:03.498209 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 23:02:03.498209 ntpd[2187]: 23 Nov 23:02:03 ntpd[2187]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 23:02:03.494296 ntpd[2187]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 23 23:02:03.494904 polkitd[2119]: Started polkitd version 126 Nov 23 23:02:03.536333 polkitd[2119]: Loading rules from directory /etc/polkit-1/rules.d Nov 23 23:02:03.537805 polkitd[2119]: Loading rules from directory /run/polkit-1/rules.d Nov 23 23:02:03.542283 polkitd[2119]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 23:02:03.543109 polkitd[2119]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 23 23:02:03.543197 polkitd[2119]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 23 23:02:03.543299 polkitd[2119]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 23 23:02:03.548159 polkitd[2119]: Finished loading, compiling and executing 2 rules Nov 23 23:02:03.550459 systemd[1]: Started polkit.service - Authorization Manager. 
Nov 23 23:02:03.559572 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 23 23:02:03.563081 polkitd[2119]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 23 23:02:03.617626 amazon-ssm-agent[2180]: Initializing new seelog logger Nov 23 23:02:03.617626 amazon-ssm-agent[2180]: New Seelog Logger Creation Complete Nov 23 23:02:03.617626 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.617626 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.617626 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 processing appconfig overrides Nov 23 23:02:03.619106 systemd-hostnamed[2018]: Hostname set to (transient) Nov 23 23:02:03.620612 systemd-resolved[1816]: System hostname changed to 'ip-172-31-29-95'. Nov 23 23:02:03.622763 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.622883 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.623239 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 processing appconfig overrides Nov 23 23:02:03.623823 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.624002 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.624277 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 processing appconfig overrides Nov 23 23:02:03.625698 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.6218 INFO Proxy environment variables: Nov 23 23:02:03.631604 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.631604 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:03.631604 amazon-ssm-agent[2180]: 2025/11/23 23:02:03 processing appconfig overrides Nov 23 23:02:03.746849 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.6218 INFO https_proxy: Nov 23 23:02:03.850230 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.6219 INFO http_proxy: Nov 23 23:02:03.946178 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.6219 INFO no_proxy: Nov 23 23:02:04.045212 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.6234 INFO Checking if agent identity type OnPrem can be assumed Nov 23 23:02:04.059660 amazon-ssm-agent[2180]: 2025/11/23 23:02:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:04.059660 amazon-ssm-agent[2180]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 23 23:02:04.062233 amazon-ssm-agent[2180]: 2025/11/23 23:02:04 processing appconfig overrides Nov 23 23:02:04.119895 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.6235 INFO Checking if agent identity type EC2 can be assumed Nov 23 23:02:04.119895 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7580 INFO Agent will take identity from EC2 Nov 23 23:02:04.119895 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7627 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 23 23:02:04.119895 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7628 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 23 23:02:04.120273 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7628 INFO [amazon-ssm-agent] Starting Core Agent Nov 23 23:02:04.120979 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7628 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 23 23:02:04.120979 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7628 INFO [Registrar] Starting registrar module Nov 23 23:02:04.120979 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7660 INFO [EC2Identity] Checking disk for registration info Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7660 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:03.7660 INFO [EC2Identity] Generating registration keypair Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.0069 INFO [EC2Identity] Checking write access before registering Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.0117 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.0590 INFO [EC2Identity] EC2 registration was successful. Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.0591 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.0593 INFO [CredentialRefresher] credentialRefresher has started Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.0593 INFO [CredentialRefresher] Starting credentials refresher loop Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.1183 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 23 23:02:04.123341 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.1186 INFO [CredentialRefresher] Credentials ready Nov 23 23:02:04.143753 amazon-ssm-agent[2180]: 2025-11-23 23:02:04.1269 INFO [CredentialRefresher] Next credential rotation will be in 29.999855767 minutes Nov 23 23:02:04.164656 tar[1988]: linux-arm64/README.md Nov 23 23:02:04.172914 sshd_keygen[1995]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:02:04.214166 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 23:02:04.236661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:02:04.246545 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:02:04.255840 systemd[1]: Started sshd@0-172.31.29.95:22-139.178.89.65:41460.service - OpenSSH per-connection server daemon (139.178.89.65:41460). Nov 23 23:02:04.290653 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:02:04.291541 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:02:04.302761 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:02:04.361352 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:02:04.371070 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:02:04.384431 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 23 23:02:04.387557 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:02:04.572911 sshd[2235]: Accepted publickey for core from 139.178.89.65 port 41460 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:04.578498 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:04.596902 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:02:04.603468 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:02:04.642053 systemd-logind[1979]: New session 1 of user core. 
Nov 23 23:02:04.666066 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:02:04.678551 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 23:02:04.705544 (systemd)[2247]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:02:04.715355 systemd-logind[1979]: New session c1 of user core. Nov 23 23:02:05.025007 systemd[2247]: Queued start job for default target default.target. Nov 23 23:02:05.045158 systemd[2247]: Created slice app.slice - User Application Slice. Nov 23 23:02:05.045296 systemd[2247]: Reached target paths.target - Paths. Nov 23 23:02:05.045397 systemd[2247]: Reached target timers.target - Timers. Nov 23 23:02:05.048510 systemd[2247]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 23:02:05.079376 systemd[2247]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:02:05.080086 systemd[2247]: Reached target sockets.target - Sockets. Nov 23 23:02:05.081202 systemd[2247]: Reached target basic.target - Basic System. Nov 23 23:02:05.081331 systemd[2247]: Reached target default.target - Main User Target. Nov 23 23:02:05.081402 systemd[2247]: Startup finished in 348ms. Nov 23 23:02:05.081576 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:02:05.091278 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:02:05.154727 amazon-ssm-agent[2180]: 2025-11-23 23:02:05.1542 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 23 23:02:05.256676 amazon-ssm-agent[2180]: 2025-11-23 23:02:05.1585 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2258) started Nov 23 23:02:05.267293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:05.277606 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:02:05.298823 systemd[1]: Started sshd@1-172.31.29.95:22-139.178.89.65:41464.service - OpenSSH per-connection server daemon (139.178.89.65:41464). Nov 23 23:02:05.304542 systemd[1]: Startup finished in 3.747s (kernel) + 9.331s (initrd) + 10.345s (userspace) = 23.424s. Nov 23 23:02:05.331635 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:02:05.361592 amazon-ssm-agent[2180]: 2025-11-23 23:02:05.1586 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 23 23:02:05.568334 sshd[2270]: Accepted publickey for core from 139.178.89.65 port 41464 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:05.571133 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:05.589298 systemd-logind[1979]: New session 2 of user core. Nov 23 23:02:05.594401 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 23:02:05.735068 sshd[2289]: Connection closed by 139.178.89.65 port 41464 Nov 23 23:02:05.736327 sshd-session[2270]: pam_unix(sshd:session): session closed for user core Nov 23 23:02:05.746660 systemd[1]: sshd@1-172.31.29.95:22-139.178.89.65:41464.service: Deactivated successfully. Nov 23 23:02:05.754545 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:02:05.759373 systemd-logind[1979]: Session 2 logged out. Waiting for processes to exit. 
Nov 23 23:02:05.779740 systemd[1]: Started sshd@2-172.31.29.95:22-139.178.89.65:41472.service - OpenSSH per-connection server daemon (139.178.89.65:41472). Nov 23 23:02:05.788097 systemd-logind[1979]: Removed session 2. Nov 23 23:02:06.037151 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 41472 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:06.040472 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:06.052035 systemd-logind[1979]: New session 3 of user core. Nov 23 23:02:06.066713 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:02:06.196485 sshd[2298]: Connection closed by 139.178.89.65 port 41472 Nov 23 23:02:06.198320 sshd-session[2295]: pam_unix(sshd:session): session closed for user core Nov 23 23:02:06.210125 systemd[1]: sshd@2-172.31.29.95:22-139.178.89.65:41472.service: Deactivated successfully. Nov 23 23:02:06.217387 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:02:06.222993 systemd-logind[1979]: Session 3 logged out. Waiting for processes to exit. Nov 23 23:02:06.242426 systemd[1]: Started sshd@3-172.31.29.95:22-139.178.89.65:41474.service - OpenSSH per-connection server daemon (139.178.89.65:41474). Nov 23 23:02:06.247621 systemd-logind[1979]: Removed session 3. Nov 23 23:02:06.416169 kubelet[2268]: E1123 23:02:06.415172 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:02:06.421446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:02:06.422375 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:02:06.424185 systemd[1]: kubelet.service: Consumed 1.565s CPU time, 257.6M memory peak. Nov 23 23:02:06.459837 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 41474 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:06.462483 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:06.472059 systemd-logind[1979]: New session 4 of user core. Nov 23 23:02:06.485310 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:02:06.620877 sshd[2309]: Connection closed by 139.178.89.65 port 41474 Nov 23 23:02:06.619784 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Nov 23 23:02:06.627879 systemd[1]: sshd@3-172.31.29.95:22-139.178.89.65:41474.service: Deactivated successfully. Nov 23 23:02:06.633548 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:02:06.637192 systemd-logind[1979]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:02:06.641539 systemd-logind[1979]: Removed session 4. Nov 23 23:02:06.659612 systemd[1]: Started sshd@4-172.31.29.95:22-139.178.89.65:41480.service - OpenSSH per-connection server daemon (139.178.89.65:41480). Nov 23 23:02:06.883622 sshd[2315]: Accepted publickey for core from 139.178.89.65 port 41480 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:06.886301 sshd-session[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:06.898078 systemd-logind[1979]: New session 5 of user core. 
Nov 23 23:02:06.905366 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 23:02:07.043618 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:02:07.044318 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:02:07.059916 sudo[2319]: pam_unix(sudo:session): session closed for user root Nov 23 23:02:07.085258 sshd[2318]: Connection closed by 139.178.89.65 port 41480 Nov 23 23:02:07.086703 sshd-session[2315]: pam_unix(sshd:session): session closed for user core Nov 23 23:02:07.094661 systemd[1]: sshd@4-172.31.29.95:22-139.178.89.65:41480.service: Deactivated successfully. Nov 23 23:02:07.098324 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:02:07.104416 systemd-logind[1979]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:02:07.123214 systemd-logind[1979]: Removed session 5. Nov 23 23:02:07.125579 systemd[1]: Started sshd@5-172.31.29.95:22-139.178.89.65:41486.service - OpenSSH per-connection server daemon (139.178.89.65:41486). Nov 23 23:02:07.333913 sshd[2325]: Accepted publickey for core from 139.178.89.65 port 41486 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:07.336510 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:07.345243 systemd-logind[1979]: New session 6 of user core. Nov 23 23:02:07.364352 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:02:07.473275 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:02:07.474340 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:02:07.487484 sudo[2330]: pam_unix(sudo:session): session closed for user root Nov 23 23:02:07.500701 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:02:07.502345 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:02:07.524283 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:02:07.605809 augenrules[2352]: No rules Nov 23 23:02:07.609528 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:02:07.610275 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:02:07.614363 sudo[2329]: pam_unix(sudo:session): session closed for user root Nov 23 23:02:07.638490 sshd[2328]: Connection closed by 139.178.89.65 port 41486 Nov 23 23:02:07.640478 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Nov 23 23:02:07.650091 systemd[1]: sshd@5-172.31.29.95:22-139.178.89.65:41486.service: Deactivated successfully. Nov 23 23:02:07.654292 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:02:07.657230 systemd-logind[1979]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:02:07.677718 systemd-logind[1979]: Removed session 6. Nov 23 23:02:07.678574 systemd[1]: Started sshd@6-172.31.29.95:22-139.178.89.65:41494.service - OpenSSH per-connection server daemon (139.178.89.65:41494). Nov 23 23:02:07.886753 sshd[2361]: Accepted publickey for core from 139.178.89.65 port 41494 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:02:07.890188 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:02:07.900491 systemd-logind[1979]: New session 7 of user core. 
Nov 23 23:02:07.914324 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 23:02:08.022365 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:02:08.023817 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:02:08.648461 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:02:08.680716 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:02:09.072046 dockerd[2382]: time="2025-11-23T23:02:09.071831921Z" level=info msg="Starting up" Nov 23 23:02:09.074120 dockerd[2382]: time="2025-11-23T23:02:09.073893605Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:02:09.096505 dockerd[2382]: time="2025-11-23T23:02:09.096441089Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:02:09.219294 dockerd[2382]: time="2025-11-23T23:02:09.219228449Z" level=info msg="Loading containers: start." Nov 23 23:02:09.238671 kernel: Initializing XFRM netlink socket Nov 23 23:02:09.605655 (udev-worker)[2403]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:02:09.683520 systemd-networkd[1811]: docker0: Link UP Nov 23 23:02:09.696795 dockerd[2382]: time="2025-11-23T23:02:09.696693764Z" level=info msg="Loading containers: done." Nov 23 23:02:09.726817 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2766477479-merged.mount: Deactivated successfully. Nov 23 23:02:09.732538 dockerd[2382]: time="2025-11-23T23:02:09.732470312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:02:09.732732 dockerd[2382]: time="2025-11-23T23:02:09.732630704Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:02:09.732847 dockerd[2382]: time="2025-11-23T23:02:09.732801572Z" level=info msg="Initializing buildkit" Nov 23 23:02:09.805854 dockerd[2382]: time="2025-11-23T23:02:09.805716740Z" level=info msg="Completed buildkit initialization" Nov 23 23:02:09.823446 dockerd[2382]: time="2025-11-23T23:02:09.823203836Z" level=info msg="Daemon has completed initialization" Nov 23 23:02:09.823593 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:02:09.824297 dockerd[2382]: time="2025-11-23T23:02:09.823854956Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:02:10.887515 systemd-resolved[1816]: Clock change detected. Flushing caches. Nov 23 23:02:11.390946 containerd[2006]: time="2025-11-23T23:02:11.390887274Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 23 23:02:12.048281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480950218.mount: Deactivated successfully. 
Nov 23 23:02:13.579198 containerd[2006]: time="2025-11-23T23:02:13.578462949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:13.580894 containerd[2006]: time="2025-11-23T23:02:13.580824789Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431959" Nov 23 23:02:13.583726 containerd[2006]: time="2025-11-23T23:02:13.583639269Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:13.596152 containerd[2006]: time="2025-11-23T23:02:13.594699969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:13.598890 containerd[2006]: time="2025-11-23T23:02:13.598798653Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.207841683s" Nov 23 23:02:13.599032 containerd[2006]: time="2025-11-23T23:02:13.598885461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Nov 23 23:02:13.599922 containerd[2006]: time="2025-11-23T23:02:13.599870649Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 23 23:02:15.174164 containerd[2006]: time="2025-11-23T23:02:15.173451717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:15.175951 containerd[2006]: time="2025-11-23T23:02:15.175904433Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618955" Nov 23 23:02:15.178180 containerd[2006]: time="2025-11-23T23:02:15.178092525Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:15.183785 containerd[2006]: time="2025-11-23T23:02:15.183699597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:15.185779 containerd[2006]: time="2025-11-23T23:02:15.185544225Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.585611704s" Nov 23 23:02:15.185779 containerd[2006]: time="2025-11-23T23:02:15.185605329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Nov 23 
23:02:15.186601 containerd[2006]: time="2025-11-23T23:02:15.186276525Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 23 23:02:16.688187 containerd[2006]: time="2025-11-23T23:02:16.687736140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:16.696284 containerd[2006]: time="2025-11-23T23:02:16.693846552Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618436" Nov 23 23:02:16.701167 containerd[2006]: time="2025-11-23T23:02:16.701095284Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:16.711514 containerd[2006]: time="2025-11-23T23:02:16.711427668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:16.714371 containerd[2006]: time="2025-11-23T23:02:16.714271584Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.527929299s" Nov 23 23:02:16.714371 containerd[2006]: time="2025-11-23T23:02:16.714361296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Nov 23 23:02:16.715157 containerd[2006]: time="2025-11-23T23:02:16.715049100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 23 23:02:17.122771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:02:17.126344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:17.571393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:17.590048 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:02:17.686044 kubelet[2670]: E1123 23:02:17.685958 2670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:02:17.698261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:02:17.698584 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:02:17.701305 systemd[1]: kubelet.service: Consumed 361ms CPU time, 107.5M memory peak. Nov 23 23:02:18.169421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618926099.mount: Deactivated successfully. 
Nov 23 23:02:18.788735 containerd[2006]: time="2025-11-23T23:02:18.788667759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:18.790798 containerd[2006]: time="2025-11-23T23:02:18.790732023Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561799" Nov 23 23:02:18.793275 containerd[2006]: time="2025-11-23T23:02:18.793176219Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:18.798898 containerd[2006]: time="2025-11-23T23:02:18.798770811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:18.800347 containerd[2006]: time="2025-11-23T23:02:18.800101491Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 2.084987843s" Nov 23 23:02:18.800347 containerd[2006]: time="2025-11-23T23:02:18.800187051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Nov 23 23:02:18.801214 containerd[2006]: time="2025-11-23T23:02:18.801151479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 23 23:02:19.425865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318733583.mount: Deactivated successfully. 
Nov 23 23:02:20.653993 containerd[2006]: time="2025-11-23T23:02:20.653922952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:20.655917 containerd[2006]: time="2025-11-23T23:02:20.655815532Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Nov 23 23:02:20.657163 containerd[2006]: time="2025-11-23T23:02:20.656711080Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:20.662306 containerd[2006]: time="2025-11-23T23:02:20.662230312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:20.666138 containerd[2006]: time="2025-11-23T23:02:20.666062056Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.864846041s" Nov 23 23:02:20.666656 containerd[2006]: time="2025-11-23T23:02:20.666141412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 23 23:02:20.666970 containerd[2006]: time="2025-11-23T23:02:20.666909136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 23:02:21.219846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807522072.mount: Deactivated successfully. 
Nov 23 23:02:21.234666 containerd[2006]: time="2025-11-23T23:02:21.233405223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:02:21.235294 containerd[2006]: time="2025-11-23T23:02:21.235255491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 23 23:02:21.238197 containerd[2006]: time="2025-11-23T23:02:21.238156407Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:02:21.242618 containerd[2006]: time="2025-11-23T23:02:21.242568315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:02:21.244071 containerd[2006]: time="2025-11-23T23:02:21.244028559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 577.062879ms" Nov 23 23:02:21.244271 containerd[2006]: time="2025-11-23T23:02:21.244241523Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 23:02:21.245340 containerd[2006]: time="2025-11-23T23:02:21.245279703Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 23 23:02:21.827739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320486133.mount: Deactivated successfully. 
Nov 23 23:02:24.186295 containerd[2006]: time="2025-11-23T23:02:24.186213222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:24.188280 containerd[2006]: time="2025-11-23T23:02:24.188196174Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Nov 23 23:02:24.192172 containerd[2006]: time="2025-11-23T23:02:24.190888674Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:24.198653 containerd[2006]: time="2025-11-23T23:02:24.197143530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:24.199459 containerd[2006]: time="2025-11-23T23:02:24.199402266Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.954065779s" Nov 23 23:02:24.199608 containerd[2006]: time="2025-11-23T23:02:24.199579626Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 23 23:02:27.949188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:02:27.953485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:28.290354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:28.302579 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:02:28.377632 kubelet[2820]: E1123 23:02:28.377573 2820 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:02:28.382629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:02:28.383084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:02:28.383808 systemd[1]: kubelet.service: Consumed 295ms CPU time, 107M memory peak. Nov 23 23:02:30.945900 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:30.946891 systemd[1]: kubelet.service: Consumed 295ms CPU time, 107M memory peak. Nov 23 23:02:30.950986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:31.000853 systemd[1]: Reload requested from client PID 2834 ('systemctl') (unit session-7.scope)... Nov 23 23:02:31.001067 systemd[1]: Reloading... Nov 23 23:02:31.277306 zram_generator::config[2882]: No configuration found. Nov 23 23:02:31.731199 systemd[1]: Reloading finished in 729 ms. Nov 23 23:02:31.818141 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:31.822749 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 23 23:02:31.823196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:31.823273 systemd[1]: kubelet.service: Consumed 244ms CPU time, 95M memory peak. Nov 23 23:02:31.827065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:32.367722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:32.384676 (kubelet)[2944]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:02:32.459857 kubelet[2944]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:02:32.459857 kubelet[2944]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:02:32.459857 kubelet[2944]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:02:32.460443 kubelet[2944]: I1123 23:02:32.459948 2944 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:02:33.873167 kubelet[2944]: I1123 23:02:33.872245 2944 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:02:33.873167 kubelet[2944]: I1123 23:02:33.872292 2944 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:02:33.873167 kubelet[2944]: I1123 23:02:33.872736 2944 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:02:33.919011 kubelet[2944]: E1123 23:02:33.918950 2944 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:33.925110 kubelet[2944]: I1123 23:02:33.925064 2944 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:02:33.938913 kubelet[2944]: I1123 23:02:33.938881 2944 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:02:33.945157 kubelet[2944]: I1123 23:02:33.945081 2944 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:02:33.946662 kubelet[2944]: I1123 23:02:33.946581 2944 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:02:33.946973 kubelet[2944]: I1123 23:02:33.946650 2944 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:02:33.947261 kubelet[2944]: I1123 23:02:33.947130 2944 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:02:33.947261 kubelet[2944]: I1123 23:02:33.947153 2944 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:02:33.947543 kubelet[2944]: I1123 23:02:33.947497 2944 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:02:33.953391 kubelet[2944]: I1123 23:02:33.953209 2944 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:02:33.953391 kubelet[2944]: I1123 23:02:33.953258 2944 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:02:33.953391 kubelet[2944]: I1123 23:02:33.953306 2944 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:02:33.953391 kubelet[2944]: I1123 23:02:33.953330 2944 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:02:33.959993 kubelet[2944]: W1123 23:02:33.959913 2944 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-95&limit=500&resourceVersion=0": dial tcp 172.31.29.95:6443: connect: connection refused Nov 23 23:02:33.960234 kubelet[2944]: E1123 23:02:33.960203 2944 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-95&limit=500&resourceVersion=0\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:33.960745 kubelet[2944]: I1123 
23:02:33.960718 2944 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:02:33.961858 kubelet[2944]: I1123 23:02:33.961820 2944 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:02:33.962256 kubelet[2944]: W1123 23:02:33.962235 2944 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 23:02:33.965745 kubelet[2944]: I1123 23:02:33.965709 2944 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:02:33.965927 kubelet[2944]: I1123 23:02:33.965910 2944 server.go:1287] "Started kubelet" Nov 23 23:02:33.974027 kubelet[2944]: E1123 23:02:33.973545 2944 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.95:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.95:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-95.187ac522049c368a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-95,UID:ip-172-31-29-95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-95,},FirstTimestamp:2025-11-23 23:02:33.965876874 +0000 UTC m=+1.575457665,LastTimestamp:2025-11-23 23:02:33.965876874 +0000 UTC m=+1.575457665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-95,}" Nov 23 23:02:33.974274 kubelet[2944]: W1123 23:02:33.974211 2944 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.95:6443: connect: connection refused Nov 23 23:02:33.974356 kubelet[2944]: E1123 23:02:33.974295 2944 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:33.975149 kubelet[2944]: I1123 23:02:33.974374 2944 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:02:33.975272 kubelet[2944]: I1123 23:02:33.975186 2944 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:02:33.975545 kubelet[2944]: I1123 23:02:33.975461 2944 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:02:33.976711 kubelet[2944]: I1123 23:02:33.976676 2944 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:02:33.977744 kubelet[2944]: I1123 23:02:33.977689 2944 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:02:33.982339 kubelet[2944]: I1123 23:02:33.982284 2944 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:02:33.989053 kubelet[2944]: E1123 23:02:33.988801 2944 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-95\" not found" Nov 23 23:02:33.989053 kubelet[2944]: I1123 23:02:33.988949 2944 
volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:02:33.989453 kubelet[2944]: I1123 23:02:33.989417 2944 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:02:33.989791 kubelet[2944]: I1123 23:02:33.989530 2944 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:02:33.991166 kubelet[2944]: E1123 23:02:33.990559 2944 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:02:33.991166 kubelet[2944]: W1123 23:02:33.990967 2944 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.95:6443: connect: connection refused Nov 23 23:02:33.991395 kubelet[2944]: E1123 23:02:33.991172 2944 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:33.992234 kubelet[2944]: E1123 23:02:33.992151 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-95?timeout=10s\": dial tcp 172.31.29.95:6443: connect: connection refused" interval="200ms" Nov 23 23:02:33.993138 kubelet[2944]: I1123 23:02:33.992740 2944 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:02:33.993138 kubelet[2944]: I1123 23:02:33.992905 2944 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:02:33.996505 kubelet[2944]: I1123 23:02:33.996445 2944 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:02:34.037529 kubelet[2944]: I1123 23:02:34.036992 2944 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:02:34.037529 kubelet[2944]: I1123 23:02:34.037028 2944 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:02:34.037529 kubelet[2944]: I1123 23:02:34.037058 2944 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:02:34.039678 kubelet[2944]: I1123 23:02:34.039618 2944 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:02:34.041214 kubelet[2944]: I1123 23:02:34.041181 2944 policy_none.go:49] "None policy: Start" Nov 23 23:02:34.041766 kubelet[2944]: I1123 23:02:34.041357 2944 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:02:34.041766 kubelet[2944]: I1123 23:02:34.041388 2944 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:02:34.042028 kubelet[2944]: I1123 23:02:34.042000 2944 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:02:34.042167 kubelet[2944]: I1123 23:02:34.042148 2944 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:02:34.042337 kubelet[2944]: I1123 23:02:34.042315 2944 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 23:02:34.042560 kubelet[2944]: I1123 23:02:34.042539 2944 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:02:34.042719 kubelet[2944]: E1123 23:02:34.042691 2944 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:02:34.051847 kubelet[2944]: W1123 23:02:34.051643 2944 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.95:6443: connect: connection refused Nov 23 23:02:34.051847 kubelet[2944]: E1123 23:02:34.051712 2944 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:34.058380 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:02:34.080714 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:02:34.088137 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:02:34.089261 kubelet[2944]: E1123 23:02:34.089207 2944 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-95\" not found" Nov 23 23:02:34.097575 kubelet[2944]: I1123 23:02:34.096847 2944 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:02:34.097575 kubelet[2944]: I1123 23:02:34.097158 2944 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:02:34.097575 kubelet[2944]: I1123 23:02:34.097179 2944 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:02:34.097090 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 23 23:02:34.100150 kubelet[2944]: I1123 23:02:34.098558 2944 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:02:34.108524 kubelet[2944]: E1123 23:02:34.108284 2944 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:02:34.109030 kubelet[2944]: E1123 23:02:34.108987 2944 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-95\" not found" Nov 23 23:02:34.166866 systemd[1]: Created slice kubepods-burstable-pod040b095071f4af269f8ca9b5e6cc31e7.slice - libcontainer container kubepods-burstable-pod040b095071f4af269f8ca9b5e6cc31e7.slice. Nov 23 23:02:34.179168 kubelet[2944]: E1123 23:02:34.178278 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:34.184792 systemd[1]: Created slice kubepods-burstable-pod04f3da31e7503f3278e4440804cb7aeb.slice - libcontainer container kubepods-burstable-pod04f3da31e7503f3278e4440804cb7aeb.slice. 
Nov 23 23:02:34.190475 kubelet[2944]: I1123 23:02:34.190355 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/040b095071f4af269f8ca9b5e6cc31e7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-95\" (UID: \"040b095071f4af269f8ca9b5e6cc31e7\") " pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:34.190475 kubelet[2944]: I1123 23:02:34.190479 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:34.190818 kubelet[2944]: I1123 23:02:34.190569 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:34.190818 kubelet[2944]: I1123 23:02:34.190653 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:34.190818 kubelet[2944]: I1123 23:02:34.190740 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:34.191202 kubelet[2944]: I1123 23:02:34.190835 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:34.191202 kubelet[2944]: I1123 23:02:34.190923 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/040b095071f4af269f8ca9b5e6cc31e7-ca-certs\") pod \"kube-apiserver-ip-172-31-29-95\" (UID: \"040b095071f4af269f8ca9b5e6cc31e7\") " pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:34.191202 kubelet[2944]: I1123 23:02:34.191007 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/040b095071f4af269f8ca9b5e6cc31e7-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-95\" (UID: \"040b095071f4af269f8ca9b5e6cc31e7\") " pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:34.193190 kubelet[2944]: I1123 23:02:34.191096 2944 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/2fe7834d7d5ac4a141fc2f154dcc13d7-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-95\" (UID: \"2fe7834d7d5ac4a141fc2f154dcc13d7\") " pod="kube-system/kube-scheduler-ip-172-31-29-95" Nov 23 23:02:34.193784 kubelet[2944]: E1123 23:02:34.193741 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-95?timeout=10s\": dial tcp 172.31.29.95:6443: connect: connection refused" interval="400ms" Nov 23 23:02:34.194364 kubelet[2944]: E1123 23:02:34.194334 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:34.197912 systemd[1]: Created slice kubepods-burstable-pod2fe7834d7d5ac4a141fc2f154dcc13d7.slice - libcontainer container kubepods-burstable-pod2fe7834d7d5ac4a141fc2f154dcc13d7.slice. Nov 23 23:02:34.202733 kubelet[2944]: I1123 23:02:34.202690 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-95" Nov 23 23:02:34.204047 kubelet[2944]: E1123 23:02:34.203635 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:34.204047 kubelet[2944]: E1123 23:02:34.203723 2944 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.95:6443/api/v1/nodes\": dial tcp 172.31.29.95:6443: connect: connection refused" node="ip-172-31-29-95" Nov 23 23:02:34.406567 kubelet[2944]: I1123 23:02:34.406533 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-95" Nov 23 23:02:34.407477 kubelet[2944]: E1123 23:02:34.407428 2944 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.95:6443/api/v1/nodes\": dial tcp 172.31.29.95:6443: connect: connection refused" node="ip-172-31-29-95" Nov 23 23:02:34.480948 containerd[2006]: time="2025-11-23T23:02:34.480789413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-95,Uid:040b095071f4af269f8ca9b5e6cc31e7,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:34.502461 containerd[2006]: time="2025-11-23T23:02:34.502385921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-95,Uid:04f3da31e7503f3278e4440804cb7aeb,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:34.519241 containerd[2006]: time="2025-11-23T23:02:34.519172973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-95,Uid:2fe7834d7d5ac4a141fc2f154dcc13d7,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:34.525161 containerd[2006]: time="2025-11-23T23:02:34.525025049Z" level=info msg="connecting to shim 2d799bf5cff931a6e31ef678f299f848acb87acb3981aee1897ff0b93de2cc45" address="unix:///run/containerd/s/abecf860140fbac62cd907d5d7a1256987ff9e4522b0020f46466923b281f7de" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:34.594425 containerd[2006]: time="2025-11-23T23:02:34.594353477Z" level=info msg="connecting to shim feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf" address="unix:///run/containerd/s/148114e997683583565fb4f4c075438fa1ded50af1fc7c2e3a8a8dd2bf12490c" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:34.598041 kubelet[2944]: E1123 23:02:34.597900 2944 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.29.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-95?timeout=10s\": dial tcp 172.31.29.95:6443: connect: connection refused" interval="800ms" Nov 23 23:02:34.633475 systemd[1]: Started cri-containerd-2d799bf5cff931a6e31ef678f299f848acb87acb3981aee1897ff0b93de2cc45.scope - libcontainer container 2d799bf5cff931a6e31ef678f299f848acb87acb3981aee1897ff0b93de2cc45. Nov 23 23:02:34.657332 containerd[2006]: time="2025-11-23T23:02:34.657273114Z" level=info msg="connecting to shim 072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d" address="unix:///run/containerd/s/a72935dc33271586b87f12b4a88bbebc40d4e77f8107a82b41350e0aefd55460" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:34.704612 systemd[1]: Started cri-containerd-feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf.scope - libcontainer container feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf. Nov 23 23:02:34.751645 systemd[1]: Started cri-containerd-072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d.scope - libcontainer container 072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d. Nov 23 23:02:34.817165 kubelet[2944]: I1123 23:02:34.816419 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-95" Nov 23 23:02:34.817165 kubelet[2944]: E1123 23:02:34.816894 2944 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.95:6443/api/v1/nodes\": dial tcp 172.31.29.95:6443: connect: connection refused" node="ip-172-31-29-95" Nov 23 23:02:34.871372 containerd[2006]: time="2025-11-23T23:02:34.871185067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-95,Uid:040b095071f4af269f8ca9b5e6cc31e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d799bf5cff931a6e31ef678f299f848acb87acb3981aee1897ff0b93de2cc45\"" Nov 23 23:02:34.877257 containerd[2006]: time="2025-11-23T23:02:34.877188055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-95,Uid:04f3da31e7503f3278e4440804cb7aeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf\"" Nov 23 23:02:34.881851 containerd[2006]: time="2025-11-23T23:02:34.881786383Z" level=info msg="CreateContainer within sandbox \"2d799bf5cff931a6e31ef678f299f848acb87acb3981aee1897ff0b93de2cc45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:02:34.885948 containerd[2006]: time="2025-11-23T23:02:34.885055159Z" level=info msg="CreateContainer within sandbox \"feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:02:34.901773 containerd[2006]: time="2025-11-23T23:02:34.901697827Z" level=info msg="Container 7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:34.902787 containerd[2006]: time="2025-11-23T23:02:34.902724319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-95,Uid:2fe7834d7d5ac4a141fc2f154dcc13d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d\"" Nov 23 23:02:34.908154 containerd[2006]: time="2025-11-23T23:02:34.908055847Z" level=info msg="Container 28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0: CDI devices from CRI Config.CDIDevices: []" Nov 23 
23:02:34.912163 containerd[2006]: time="2025-11-23T23:02:34.912057907Z" level=info msg="CreateContainer within sandbox \"072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:02:34.924582 containerd[2006]: time="2025-11-23T23:02:34.924498295Z" level=info msg="CreateContainer within sandbox \"2d799bf5cff931a6e31ef678f299f848acb87acb3981aee1897ff0b93de2cc45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12\"" Nov 23 23:02:34.929018 containerd[2006]: time="2025-11-23T23:02:34.928259455Z" level=info msg="StartContainer for \"7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12\"" Nov 23 23:02:34.934249 containerd[2006]: time="2025-11-23T23:02:34.934188631Z" level=info msg="connecting to shim 7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12" address="unix:///run/containerd/s/abecf860140fbac62cd907d5d7a1256987ff9e4522b0020f46466923b281f7de" protocol=ttrpc version=3 Nov 23 23:02:34.938574 containerd[2006]: time="2025-11-23T23:02:34.938479051Z" level=info msg="CreateContainer within sandbox \"feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0\"" Nov 23 23:02:34.939645 containerd[2006]: time="2025-11-23T23:02:34.939594715Z" level=info msg="StartContainer for \"28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0\"" Nov 23 23:02:34.942044 containerd[2006]: time="2025-11-23T23:02:34.941869219Z" level=info msg="Container 51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:34.945111 containerd[2006]: time="2025-11-23T23:02:34.945052135Z" level=info msg="connecting to shim 28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0" address="unix:///run/containerd/s/148114e997683583565fb4f4c075438fa1ded50af1fc7c2e3a8a8dd2bf12490c" protocol=ttrpc version=3 Nov 23 23:02:34.960251 containerd[2006]: time="2025-11-23T23:02:34.960075811Z" level=info msg="CreateContainer within sandbox \"072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d\"" Nov 23 23:02:34.963812 containerd[2006]: time="2025-11-23T23:02:34.963765859Z" level=info msg="StartContainer for \"51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d\"" Nov 23 23:02:34.968675 containerd[2006]: time="2025-11-23T23:02:34.968454907Z" level=info msg="connecting to shim 51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d" address="unix:///run/containerd/s/a72935dc33271586b87f12b4a88bbebc40d4e77f8107a82b41350e0aefd55460" protocol=ttrpc version=3 Nov 23 23:02:34.986562 systemd[1]: Started cri-containerd-7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12.scope - libcontainer container 7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12. Nov 23 23:02:35.016452 systemd[1]: Started cri-containerd-28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0.scope - libcontainer container 28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0. 
Nov 23 23:02:35.037502 systemd[1]: Started cri-containerd-51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d.scope - libcontainer container 51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d. Nov 23 23:02:35.174759 containerd[2006]: time="2025-11-23T23:02:35.174610960Z" level=info msg="StartContainer for \"7aedb68823791f077ee11de85861364e070128654de262f2b0bc4762e025ed12\" returns successfully" Nov 23 23:02:35.196153 containerd[2006]: time="2025-11-23T23:02:35.195999544Z" level=info msg="StartContainer for \"28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0\" returns successfully" Nov 23 23:02:35.244335 kubelet[2944]: W1123 23:02:35.244109 2944 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-95&limit=500&resourceVersion=0": dial tcp 172.31.29.95:6443: connect: connection refused Nov 23 23:02:35.245876 kubelet[2944]: E1123 23:02:35.244314 2944 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-95&limit=500&resourceVersion=0\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:35.245876 kubelet[2944]: W1123 23:02:35.245768 2944 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.95:6443: connect: connection refused Nov 23 23:02:35.245876 kubelet[2944]: E1123 23:02:35.245861 2944 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.95:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:02:35.264784 containerd[2006]: time="2025-11-23T23:02:35.264520793Z" level=info msg="StartContainer for \"51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d\" returns successfully" Nov 23 23:02:35.620523 kubelet[2944]: I1123 23:02:35.620475 2944 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-95" Nov 23 23:02:36.105740 kubelet[2944]: E1123 23:02:36.105077 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:36.113685 kubelet[2944]: E1123 23:02:36.113617 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:36.130206 kubelet[2944]: E1123 23:02:36.130157 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:37.132978 kubelet[2944]: E1123 23:02:37.132929 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:37.134255 kubelet[2944]: E1123 23:02:37.133719 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" 
node="ip-172-31-29-95" Nov 23 23:02:37.136347 kubelet[2944]: E1123 23:02:37.136300 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:38.134676 kubelet[2944]: E1123 23:02:38.134627 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:38.136790 kubelet[2944]: E1123 23:02:38.135031 2944 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:40.163385 kubelet[2944]: E1123 23:02:40.163326 2944 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-95\" not found" node="ip-172-31-29-95" Nov 23 23:02:40.289231 kubelet[2944]: E1123 23:02:40.289003 2944 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-29-95.187ac522049c368a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-95,UID:ip-172-31-29-95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-95,},FirstTimestamp:2025-11-23 23:02:33.965876874 +0000 UTC m=+1.575457665,LastTimestamp:2025-11-23 23:02:33.965876874 +0000 UTC m=+1.575457665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-95,}" Nov 23 23:02:40.367164 kubelet[2944]: I1123 23:02:40.367082 2944 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-95" Nov 23 23:02:40.392182 kubelet[2944]: I1123 23:02:40.392106 2944 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:40.427214 kubelet[2944]: E1123 23:02:40.427012 2944 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:40.427214 kubelet[2944]: I1123 23:02:40.427080 2944 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-95" Nov 23 23:02:40.443773 kubelet[2944]: E1123 23:02:40.443362 2944 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-95" Nov 23 23:02:40.443773 kubelet[2944]: I1123 23:02:40.443421 2944 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:40.448806 kubelet[2944]: E1123 23:02:40.448759 2944 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:40.973485 kubelet[2944]: I1123 23:02:40.973433 2944 apiserver.go:52] "Watching apiserver" Nov 23 23:02:40.990099 kubelet[2944]: I1123 23:02:40.990030 2944 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:02:42.387444 systemd[1]: 
Reload requested from client PID 3217 ('systemctl') (unit session-7.scope)... Nov 23 23:02:42.388006 systemd[1]: Reloading... Nov 23 23:02:42.645290 zram_generator::config[3264]: No configuration found. Nov 23 23:02:43.175095 systemd[1]: Reloading finished in 786 ms. Nov 23 23:02:43.242428 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:43.260261 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:02:43.261193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:43.261518 systemd[1]: kubelet.service: Consumed 2.368s CPU time, 128.6M memory peak. Nov 23 23:02:43.268201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:02:43.673236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:02:43.695691 (kubelet)[3321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:02:43.825154 kubelet[3321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:02:43.825154 kubelet[3321]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:02:43.825154 kubelet[3321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:02:43.827157 kubelet[3321]: I1123 23:02:43.826280 3321 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:02:43.841389 kubelet[3321]: I1123 23:02:43.841325 3321 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:02:43.841389 kubelet[3321]: I1123 23:02:43.841378 3321 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:02:43.841908 kubelet[3321]: I1123 23:02:43.841858 3321 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:02:43.844471 kubelet[3321]: I1123 23:02:43.844413 3321 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 23 23:02:43.851948 kubelet[3321]: I1123 23:02:43.851376 3321 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:02:43.882098 kubelet[3321]: I1123 23:02:43.880852 3321 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:02:43.890647 kubelet[3321]: I1123 23:02:43.890584 3321 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:02:43.891137 kubelet[3321]: I1123 23:02:43.891015 3321 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:02:43.891495 kubelet[3321]: I1123 23:02:43.891075 3321 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:02:43.891665 kubelet[3321]: I1123 23:02:43.891514 3321 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:02:43.891665 kubelet[3321]: I1123 23:02:43.891544 3321 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:02:43.891665 kubelet[3321]: I1123 23:02:43.891631 3321 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:02:43.892515 kubelet[3321]: I1123 23:02:43.891929 3321 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:02:43.892515 kubelet[3321]: I1123 23:02:43.891956 3321 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:02:43.892515 kubelet[3321]: I1123 23:02:43.891998 3321 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:02:43.892515 kubelet[3321]: I1123 23:02:43.892018 3321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:02:43.901864 kubelet[3321]: I1123 23:02:43.901312 3321 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:02:43.902091 kubelet[3321]: I1123 23:02:43.902048 3321 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:02:43.906039 kubelet[3321]: I1123 23:02:43.905971 3321 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:02:43.906039 kubelet[3321]: I1123 23:02:43.906039 3321 server.go:1287] "Started kubelet" Nov 23 23:02:43.921214 kubelet[3321]: I1123 23:02:43.921046 3321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:02:43.936611 kubelet[3321]: I1123 23:02:43.935694 3321 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Nov 23 23:02:43.938392 kubelet[3321]: I1123 23:02:43.938285 3321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:02:43.940810 kubelet[3321]: I1123 23:02:43.940747 3321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:02:43.943635 kubelet[3321]: I1123 23:02:43.943560 3321 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:02:43.945951 kubelet[3321]: E1123 23:02:43.944741 3321 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-95\" not found" Nov 23 23:02:43.947322 kubelet[3321]: I1123 23:02:43.947100 3321 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:02:43.948716 kubelet[3321]: I1123 23:02:43.948655 3321 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:02:43.955159 kubelet[3321]: I1123 23:02:43.954895 3321 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:02:43.959051 kubelet[3321]: I1123 23:02:43.957524 3321 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:02:43.981698 kubelet[3321]: I1123 23:02:43.981632 3321 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:02:44.005147 kubelet[3321]: I1123 23:02:44.005026 3321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:02:44.010719 kubelet[3321]: I1123 23:02:44.010645 3321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:02:44.010719 kubelet[3321]: I1123 23:02:44.010707 3321 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:02:44.010928 kubelet[3321]: I1123 23:02:44.010742 3321 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:02:44.010928 kubelet[3321]: I1123 23:02:44.010756 3321 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:02:44.010928 kubelet[3321]: E1123 23:02:44.010834 3321 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:02:44.037820 kubelet[3321]: I1123 23:02:44.036667 3321 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:02:44.038034 kubelet[3321]: I1123 23:02:44.038006 3321 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:02:44.048296 kubelet[3321]: E1123 23:02:44.048229 3321 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:02:44.111323 kubelet[3321]: E1123 23:02:44.111147 3321 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 23:02:44.178013 kubelet[3321]: I1123 23:02:44.177944 3321 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:02:44.178013 kubelet[3321]: I1123 23:02:44.177982 3321 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:02:44.178013 kubelet[3321]: I1123 23:02:44.178018 3321 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:02:44.178369 kubelet[3321]: I1123 23:02:44.178333 3321 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:02:44.178434 kubelet[3321]: I1123 23:02:44.178367 3321 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:02:44.178434 kubelet[3321]: I1123 23:02:44.178403 3321 policy_none.go:49] "None policy: Start" Nov 23 23:02:44.178434 kubelet[3321]: I1123 23:02:44.178421 3321 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:02:44.178629 kubelet[3321]: I1123 23:02:44.178440 3321 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:02:44.178688 kubelet[3321]: I1123 23:02:44.178664 3321 state_mem.go:75] "Updated machine memory state" Nov 23 23:02:44.190405 kubelet[3321]: I1123 23:02:44.190252 3321 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:02:44.191370 kubelet[3321]: I1123 23:02:44.190609 3321 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:02:44.191370 kubelet[3321]: I1123 23:02:44.190646 3321 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:02:44.197556 kubelet[3321]: I1123 23:02:44.197492 3321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:02:44.218777 kubelet[3321]: E1123 23:02:44.218579 3321 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:02:44.312623 kubelet[3321]: I1123 23:02:44.312513 3321 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:44.316043 kubelet[3321]: I1123 23:02:44.314525 3321 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:44.316596 kubelet[3321]: I1123 23:02:44.316544 3321 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-95" Nov 23 23:02:44.346510 kubelet[3321]: I1123 23:02:44.346308 3321 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-95" Nov 23 23:02:44.370586 kubelet[3321]: I1123 23:02:44.369915 3321 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-95" Nov 23 23:02:44.370586 kubelet[3321]: I1123 23:02:44.370451 3321 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-95" Nov 23 23:02:44.377902 kubelet[3321]: I1123 23:02:44.377378 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:44.377902 kubelet[3321]: I1123 23:02:44.377449 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:44.377902 kubelet[3321]: I1123 23:02:44.377496 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/040b095071f4af269f8ca9b5e6cc31e7-ca-certs\") pod \"kube-apiserver-ip-172-31-29-95\" (UID: \"040b095071f4af269f8ca9b5e6cc31e7\") " pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:44.377902 kubelet[3321]: I1123 23:02:44.377532 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:44.377902 kubelet[3321]: I1123 23:02:44.377567 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:44.379600 kubelet[3321]: I1123 23:02:44.377605 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/04f3da31e7503f3278e4440804cb7aeb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-95\" (UID: \"04f3da31e7503f3278e4440804cb7aeb\") " pod="kube-system/kube-controller-manager-ip-172-31-29-95" Nov 23 23:02:44.379600 kubelet[3321]: I1123 23:02:44.378612 
3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fe7834d7d5ac4a141fc2f154dcc13d7-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-95\" (UID: \"2fe7834d7d5ac4a141fc2f154dcc13d7\") " pod="kube-system/kube-scheduler-ip-172-31-29-95" Nov 23 23:02:44.379600 kubelet[3321]: I1123 23:02:44.378715 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/040b095071f4af269f8ca9b5e6cc31e7-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-95\" (UID: \"040b095071f4af269f8ca9b5e6cc31e7\") " pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:44.379600 kubelet[3321]: I1123 23:02:44.378786 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/040b095071f4af269f8ca9b5e6cc31e7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-95\" (UID: \"040b095071f4af269f8ca9b5e6cc31e7\") " pod="kube-system/kube-apiserver-ip-172-31-29-95" Nov 23 23:02:44.895418 kubelet[3321]: I1123 23:02:44.895362 3321 apiserver.go:52] "Watching apiserver" Nov 23 23:02:44.948279 kubelet[3321]: I1123 23:02:44.948210 3321 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:02:45.191553 kubelet[3321]: I1123 23:02:45.190604 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-95" podStartSLOduration=1.190579214 podStartE2EDuration="1.190579214s" podCreationTimestamp="2025-11-23 23:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:45.17195213 +0000 UTC m=+1.466696096" watchObservedRunningTime="2025-11-23 23:02:45.190579214 +0000 UTC m=+1.485323168" Nov 23 23:02:45.192634 kubelet[3321]: I1123 23:02:45.192333 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-95" podStartSLOduration=1.192311882 podStartE2EDuration="1.192311882s" podCreationTimestamp="2025-11-23 23:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:45.192110294 +0000 UTC m=+1.486854248" watchObservedRunningTime="2025-11-23 23:02:45.192311882 +0000 UTC m=+1.487055824" Nov 23 23:02:45.239436 kubelet[3321]: I1123 23:02:45.237867 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-95" podStartSLOduration=1.237843182 podStartE2EDuration="1.237843182s" podCreationTimestamp="2025-11-23 23:02:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:45.21170699 +0000 UTC m=+1.506450968" watchObservedRunningTime="2025-11-23 23:02:45.237843182 +0000 UTC m=+1.532587136" Nov 23 23:02:46.939864 kubelet[3321]: I1123 23:02:46.939817 3321 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:02:46.940628 containerd[2006]: time="2025-11-23T23:02:46.940381903Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 23 23:02:46.941840 kubelet[3321]: I1123 23:02:46.941425 3321 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:02:47.330458 update_engine[1981]: I20251123 23:02:47.330256 1981 update_attempter.cc:509] Updating boot flags... Nov 23 23:02:47.806243 kubelet[3321]: I1123 23:02:47.800295 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7690c10c-a3a1-45d4-be53-73b8400be49b-xtables-lock\") pod \"kube-proxy-8p4mc\" (UID: \"7690c10c-a3a1-45d4-be53-73b8400be49b\") " pod="kube-system/kube-proxy-8p4mc" Nov 23 23:02:47.806243 kubelet[3321]: I1123 23:02:47.800454 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7690c10c-a3a1-45d4-be53-73b8400be49b-lib-modules\") pod \"kube-proxy-8p4mc\" (UID: \"7690c10c-a3a1-45d4-be53-73b8400be49b\") " pod="kube-system/kube-proxy-8p4mc" Nov 23 23:02:47.806243 kubelet[3321]: I1123 23:02:47.800505 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhkp6\" (UniqueName: \"kubernetes.io/projected/7690c10c-a3a1-45d4-be53-73b8400be49b-kube-api-access-lhkp6\") pod \"kube-proxy-8p4mc\" (UID: \"7690c10c-a3a1-45d4-be53-73b8400be49b\") " pod="kube-system/kube-proxy-8p4mc" Nov 23 23:02:47.806243 kubelet[3321]: I1123 23:02:47.800584 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7690c10c-a3a1-45d4-be53-73b8400be49b-kube-proxy\") pod \"kube-proxy-8p4mc\" (UID: \"7690c10c-a3a1-45d4-be53-73b8400be49b\") " pod="kube-system/kube-proxy-8p4mc" Nov 23 23:02:48.079254 systemd[1]: Created slice kubepods-besteffort-pod7690c10c_a3a1_45d4_be53_73b8400be49b.slice - libcontainer container kubepods-besteffort-pod7690c10c_a3a1_45d4_be53_73b8400be49b.slice. 
Nov 23 23:02:48.096649 containerd[2006]: time="2025-11-23T23:02:48.096573112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8p4mc,Uid:7690c10c-a3a1-45d4-be53-73b8400be49b,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:48.180190 containerd[2006]: time="2025-11-23T23:02:48.179580233Z" level=info msg="connecting to shim 5f964b309c936ff36b8e8f4e3b4415635a357639ab7846181d7e6f1caa4023bc" address="unix:///run/containerd/s/cd594dbd18c3cee1bb5ec67e6f5673f3ed0108eb8de64fae6fa45ba52426f445" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:48.226865 kubelet[3321]: W1123 23:02:48.226779 3321 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-29-95" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-29-95' and this object Nov 23 23:02:48.227659 kubelet[3321]: E1123 23:02:48.226903 3321 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-29-95' and this object" logger="UnhandledError" Nov 23 23:02:48.253390 systemd[1]: Created slice kubepods-besteffort-podb461e83c_27db_4837_bfc9_6bd723cbe8c2.slice - libcontainer container kubepods-besteffort-podb461e83c_27db_4837_bfc9_6bd723cbe8c2.slice. Nov 23 23:02:48.309520 kubelet[3321]: I1123 23:02:48.309407 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlfgb\" (UniqueName: \"kubernetes.io/projected/b461e83c-27db-4837-bfc9-6bd723cbe8c2-kube-api-access-xlfgb\") pod \"tigera-operator-7dcd859c48-hvksj\" (UID: \"b461e83c-27db-4837-bfc9-6bd723cbe8c2\") " pod="tigera-operator/tigera-operator-7dcd859c48-hvksj" Nov 23 23:02:48.309648 kubelet[3321]: I1123 23:02:48.309527 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b461e83c-27db-4837-bfc9-6bd723cbe8c2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hvksj\" (UID: \"b461e83c-27db-4837-bfc9-6bd723cbe8c2\") " pod="tigera-operator/tigera-operator-7dcd859c48-hvksj" Nov 23 23:02:48.318528 systemd[1]: Started cri-containerd-5f964b309c936ff36b8e8f4e3b4415635a357639ab7846181d7e6f1caa4023bc.scope - libcontainer container 5f964b309c936ff36b8e8f4e3b4415635a357639ab7846181d7e6f1caa4023bc. 
Nov 23 23:02:48.544550 containerd[2006]: time="2025-11-23T23:02:48.542366263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8p4mc,Uid:7690c10c-a3a1-45d4-be53-73b8400be49b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f964b309c936ff36b8e8f4e3b4415635a357639ab7846181d7e6f1caa4023bc\"" Nov 23 23:02:48.556421 containerd[2006]: time="2025-11-23T23:02:48.555969331Z" level=info msg="CreateContainer within sandbox \"5f964b309c936ff36b8e8f4e3b4415635a357639ab7846181d7e6f1caa4023bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:02:48.591019 containerd[2006]: time="2025-11-23T23:02:48.590390011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hvksj,Uid:b461e83c-27db-4837-bfc9-6bd723cbe8c2,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:02:48.615881 containerd[2006]: time="2025-11-23T23:02:48.615810559Z" level=info msg="Container eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:48.651170 containerd[2006]: time="2025-11-23T23:02:48.648026839Z" level=info msg="CreateContainer within sandbox \"5f964b309c936ff36b8e8f4e3b4415635a357639ab7846181d7e6f1caa4023bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831\"" Nov 23 23:02:48.665800 containerd[2006]: time="2025-11-23T23:02:48.662636323Z" level=info msg="StartContainer for \"eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831\"" Nov 23 23:02:48.687407 containerd[2006]: time="2025-11-23T23:02:48.687347467Z" level=info msg="connecting to shim fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746" address="unix:///run/containerd/s/d27d55fa9612ae97cd3d37b5246bb24d6ab893bddc71bb363ca026ceae8126e1" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:48.693577 containerd[2006]: time="2025-11-23T23:02:48.693523627Z" level=info msg="connecting to shim eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831" address="unix:///run/containerd/s/cd594dbd18c3cee1bb5ec67e6f5673f3ed0108eb8de64fae6fa45ba52426f445" protocol=ttrpc version=3 Nov 23 23:02:48.788603 systemd[1]: Started cri-containerd-eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831.scope - libcontainer container eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831. Nov 23 23:02:48.830355 systemd[1]: Started cri-containerd-fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746.scope - libcontainer container fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746. 
Nov 23 23:02:49.198513 containerd[2006]: time="2025-11-23T23:02:49.198452454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hvksj,Uid:b461e83c-27db-4837-bfc9-6bd723cbe8c2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746\"" Nov 23 23:02:49.219687 containerd[2006]: time="2025-11-23T23:02:49.219601434Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:02:49.289949 containerd[2006]: time="2025-11-23T23:02:49.289286142Z" level=info msg="StartContainer for \"eec4721016533dc44242d0ab90d39621e7c7ead69caa0211040e210e64c9d831\" returns successfully" Nov 23 23:02:50.206010 kubelet[3321]: I1123 23:02:50.205854 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8p4mc" podStartSLOduration=3.205765855 podStartE2EDuration="3.205765855s" podCreationTimestamp="2025-11-23 23:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:50.203978455 +0000 UTC m=+6.498722433" watchObservedRunningTime="2025-11-23 23:02:50.205765855 +0000 UTC m=+6.500509797" Nov 23 23:02:50.322963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767387925.mount: Deactivated successfully. Nov 23 23:02:51.280594 containerd[2006]: time="2025-11-23T23:02:51.280516964Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:51.282095 containerd[2006]: time="2025-11-23T23:02:51.282017924Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:02:51.285183 containerd[2006]: time="2025-11-23T23:02:51.283004972Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:51.287631 containerd[2006]: time="2025-11-23T23:02:51.287564864Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:51.289321 containerd[2006]: time="2025-11-23T23:02:51.289243088Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.069564386s" Nov 23 23:02:51.289321 containerd[2006]: time="2025-11-23T23:02:51.289312292Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:02:51.296963 containerd[2006]: time="2025-11-23T23:02:51.296889104Z" level=info msg="CreateContainer within sandbox \"fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:02:51.313165 containerd[2006]: time="2025-11-23T23:02:51.310197236Z" level=info msg="Container c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:51.322500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576749488.mount: 
Deactivated successfully. Nov 23 23:02:51.330938 containerd[2006]: time="2025-11-23T23:02:51.330867104Z" level=info msg="CreateContainer within sandbox \"fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a\"" Nov 23 23:02:51.336217 containerd[2006]: time="2025-11-23T23:02:51.335999996Z" level=info msg="StartContainer for \"c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a\"" Nov 23 23:02:51.342397 containerd[2006]: time="2025-11-23T23:02:51.342331940Z" level=info msg="connecting to shim c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a" address="unix:///run/containerd/s/d27d55fa9612ae97cd3d37b5246bb24d6ab893bddc71bb363ca026ceae8126e1" protocol=ttrpc version=3 Nov 23 23:02:51.390524 systemd[1]: Started cri-containerd-c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a.scope - libcontainer container c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a. Nov 23 23:02:51.456192 containerd[2006]: time="2025-11-23T23:02:51.455728785Z" level=info msg="StartContainer for \"c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a\" returns successfully" Nov 23 23:02:52.215698 kubelet[3321]: I1123 23:02:52.215497 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hvksj" podStartSLOduration=2.132924619 podStartE2EDuration="4.215475693s" podCreationTimestamp="2025-11-23 23:02:48 +0000 UTC" firstStartedPulling="2025-11-23 23:02:49.209423226 +0000 UTC m=+5.504167156" lastFinishedPulling="2025-11-23 23:02:51.2919743 +0000 UTC m=+7.586718230" observedRunningTime="2025-11-23 23:02:52.215161305 +0000 UTC m=+8.509905271" watchObservedRunningTime="2025-11-23 23:02:52.215475693 +0000 UTC m=+8.510219647" Nov 23 23:03:00.730513 sudo[2365]: pam_unix(sudo:session): session closed for user root Nov 23 23:03:00.763297 sshd[2364]: Connection closed by 139.178.89.65 port 41494 Nov 23 23:03:00.768487 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:00.780838 systemd[1]: sshd@6-172.31.29.95:22-139.178.89.65:41494.service: Deactivated successfully. Nov 23 23:03:00.789550 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:03:00.790169 systemd[1]: session-7.scope: Consumed 10.480s CPU time, 219.9M memory peak. Nov 23 23:03:00.794585 systemd-logind[1979]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:03:00.799792 systemd-logind[1979]: Removed session 7. Nov 23 23:03:21.002081 systemd[1]: Created slice kubepods-besteffort-podd73b9c7f_3035_4339_b56b_824d110040f7.slice - libcontainer container kubepods-besteffort-podd73b9c7f_3035_4339_b56b_824d110040f7.slice. 
Nov 23 23:03:21.044919 kubelet[3321]: I1123 23:03:21.044432 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d73b9c7f-3035-4339-b56b-824d110040f7-typha-certs\") pod \"calico-typha-54b57b884f-ztr9k\" (UID: \"d73b9c7f-3035-4339-b56b-824d110040f7\") " pod="calico-system/calico-typha-54b57b884f-ztr9k" Nov 23 23:03:21.044919 kubelet[3321]: I1123 23:03:21.044655 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d73b9c7f-3035-4339-b56b-824d110040f7-tigera-ca-bundle\") pod \"calico-typha-54b57b884f-ztr9k\" (UID: \"d73b9c7f-3035-4339-b56b-824d110040f7\") " pod="calico-system/calico-typha-54b57b884f-ztr9k" Nov 23 23:03:21.044919 kubelet[3321]: I1123 23:03:21.044712 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8rv\" (UniqueName: \"kubernetes.io/projected/d73b9c7f-3035-4339-b56b-824d110040f7-kube-api-access-8b8rv\") pod \"calico-typha-54b57b884f-ztr9k\" (UID: \"d73b9c7f-3035-4339-b56b-824d110040f7\") " pod="calico-system/calico-typha-54b57b884f-ztr9k" Nov 23 23:03:21.303807 systemd[1]: Created slice kubepods-besteffort-podee3aa845_cb25_4fe0_bbd8_8e549ea9bb47.slice - libcontainer container kubepods-besteffort-podee3aa845_cb25_4fe0_bbd8_8e549ea9bb47.slice. Nov 23 23:03:21.316930 containerd[2006]: time="2025-11-23T23:03:21.316672645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54b57b884f-ztr9k,Uid:d73b9c7f-3035-4339-b56b-824d110040f7,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:21.348597 kubelet[3321]: I1123 23:03:21.348448 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-lib-modules\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.348597 kubelet[3321]: I1123 23:03:21.348530 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-tigera-ca-bundle\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.348597 kubelet[3321]: I1123 23:03:21.348583 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-flexvol-driver-host\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.349868 kubelet[3321]: I1123 23:03:21.348631 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-cni-net-dir\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.349868 kubelet[3321]: I1123 23:03:21.348671 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-var-lib-calico\") pod \"calico-node-t8759\" (UID: 
\"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.349868 kubelet[3321]: I1123 23:03:21.348711 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-xtables-lock\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.349868 kubelet[3321]: I1123 23:03:21.348912 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhk4r\" (UniqueName: \"kubernetes.io/projected/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-kube-api-access-qhk4r\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.349868 kubelet[3321]: I1123 23:03:21.349577 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-cni-log-dir\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.351238 kubelet[3321]: I1123 23:03:21.350025 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-node-certs\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.351238 kubelet[3321]: I1123 23:03:21.350447 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-policysync\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.351238 kubelet[3321]: I1123 23:03:21.350660 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-var-run-calico\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.351238 kubelet[3321]: I1123 23:03:21.351036 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47-cni-bin-dir\") pod \"calico-node-t8759\" (UID: \"ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47\") " pod="calico-system/calico-node-t8759" Nov 23 23:03:21.410229 containerd[2006]: time="2025-11-23T23:03:21.409085642Z" level=info msg="connecting to shim c00daeb88b9b2ab7705c97d28cde6e8cc5536ae827f408299b76efcc642f8f77" address="unix:///run/containerd/s/3a6f0dabfda11c3cc0adf9687c104b7b9958645f426e97ee9758ed86a3e36065" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:21.460158 kubelet[3321]: E1123 23:03:21.459166 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.460158 kubelet[3321]: W1123 23:03:21.459219 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 
23:03:21.460158 kubelet[3321]: E1123 23:03:21.459606 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.463321 kubelet[3321]: E1123 23:03:21.461808 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.464264 kubelet[3321]: W1123 23:03:21.463627 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.464264 kubelet[3321]: E1123 23:03:21.463737 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.466532 kubelet[3321]: E1123 23:03:21.466475 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.466714 kubelet[3321]: W1123 23:03:21.466519 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.469470 kubelet[3321]: E1123 23:03:21.469356 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.471215 kubelet[3321]: W1123 23:03:21.469637 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.471428 kubelet[3321]: E1123 23:03:21.469460 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.471803 kubelet[3321]: E1123 23:03:21.470965 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.472886 kubelet[3321]: E1123 23:03:21.472824 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.473063 kubelet[3321]: W1123 23:03:21.472876 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.474285 kubelet[3321]: E1123 23:03:21.473804 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.479429 kubelet[3321]: E1123 23:03:21.478541 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:21.481532 kubelet[3321]: E1123 23:03:21.480320 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.481532 kubelet[3321]: W1123 23:03:21.481376 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.481953 kubelet[3321]: E1123 23:03:21.481888 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.486926 kubelet[3321]: E1123 23:03:21.486563 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.491334 kubelet[3321]: W1123 23:03:21.491210 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.491801 kubelet[3321]: E1123 23:03:21.491550 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.494432 kubelet[3321]: E1123 23:03:21.494356 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.498228 kubelet[3321]: W1123 23:03:21.497245 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.498228 kubelet[3321]: E1123 23:03:21.497310 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.500688 kubelet[3321]: E1123 23:03:21.500225 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.500688 kubelet[3321]: W1123 23:03:21.500295 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.500688 kubelet[3321]: E1123 23:03:21.500420 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.502572 kubelet[3321]: E1123 23:03:21.502449 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.502948 kubelet[3321]: W1123 23:03:21.502563 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.503106 kubelet[3321]: E1123 23:03:21.502967 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.506335 kubelet[3321]: I1123 23:03:21.506111 3321 status_manager.go:890] "Failed to get status for pod" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" pod="calico-system/csi-node-driver-d6qd7" err="pods \"csi-node-driver-d6qd7\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" Nov 23 23:03:21.524546 kubelet[3321]: E1123 23:03:21.524471 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.524546 kubelet[3321]: W1123 23:03:21.524520 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.524546 kubelet[3321]: E1123 23:03:21.524555 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.526720 systemd[1]: Started cri-containerd-c00daeb88b9b2ab7705c97d28cde6e8cc5536ae827f408299b76efcc642f8f77.scope - libcontainer container c00daeb88b9b2ab7705c97d28cde6e8cc5536ae827f408299b76efcc642f8f77. Nov 23 23:03:21.531486 kubelet[3321]: E1123 23:03:21.531431 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.531648 kubelet[3321]: W1123 23:03:21.531473 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.531648 kubelet[3321]: E1123 23:03:21.531552 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.535164 kubelet[3321]: E1123 23:03:21.534353 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.535164 kubelet[3321]: W1123 23:03:21.534550 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.535164 kubelet[3321]: E1123 23:03:21.534882 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.536683 kubelet[3321]: E1123 23:03:21.536586 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.536683 kubelet[3321]: W1123 23:03:21.536667 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.536905 kubelet[3321]: E1123 23:03:21.536739 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.537996 kubelet[3321]: E1123 23:03:21.537948 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.537996 kubelet[3321]: W1123 23:03:21.537986 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.538620 kubelet[3321]: E1123 23:03:21.538496 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.540263 kubelet[3321]: E1123 23:03:21.540207 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.540476 kubelet[3321]: W1123 23:03:21.540276 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.540476 kubelet[3321]: E1123 23:03:21.540314 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.540990 kubelet[3321]: E1123 23:03:21.540925 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.541096 kubelet[3321]: W1123 23:03:21.540989 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.541096 kubelet[3321]: E1123 23:03:21.541026 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.542154 kubelet[3321]: E1123 23:03:21.541685 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.542154 kubelet[3321]: W1123 23:03:21.541745 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.542154 kubelet[3321]: E1123 23:03:21.541779 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.542963 kubelet[3321]: E1123 23:03:21.542858 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.542963 kubelet[3321]: W1123 23:03:21.542930 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.543217 kubelet[3321]: E1123 23:03:21.543004 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.543861 kubelet[3321]: E1123 23:03:21.543546 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.543861 kubelet[3321]: W1123 23:03:21.543609 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.543861 kubelet[3321]: E1123 23:03:21.543639 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.545196 kubelet[3321]: E1123 23:03:21.544101 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.545196 kubelet[3321]: W1123 23:03:21.544228 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.545196 kubelet[3321]: E1123 23:03:21.544261 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.545504 kubelet[3321]: E1123 23:03:21.545216 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.545504 kubelet[3321]: W1123 23:03:21.545244 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.545504 kubelet[3321]: E1123 23:03:21.545315 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.545782 kubelet[3321]: E1123 23:03:21.545735 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.545782 kubelet[3321]: W1123 23:03:21.545771 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.545915 kubelet[3321]: E1123 23:03:21.545799 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.547834 kubelet[3321]: E1123 23:03:21.547086 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.547834 kubelet[3321]: W1123 23:03:21.547150 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.547834 kubelet[3321]: E1123 23:03:21.547182 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.547834 kubelet[3321]: E1123 23:03:21.547607 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.547834 kubelet[3321]: W1123 23:03:21.547627 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.547834 kubelet[3321]: E1123 23:03:21.547650 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.548379 kubelet[3321]: E1123 23:03:21.547983 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.548379 kubelet[3321]: W1123 23:03:21.548004 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.548379 kubelet[3321]: E1123 23:03:21.548028 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.548544 kubelet[3321]: E1123 23:03:21.548455 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.548544 kubelet[3321]: W1123 23:03:21.548481 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.548544 kubelet[3321]: E1123 23:03:21.548507 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.550160 kubelet[3321]: E1123 23:03:21.548854 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.550160 kubelet[3321]: W1123 23:03:21.548893 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.550160 kubelet[3321]: E1123 23:03:21.548922 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.550160 kubelet[3321]: E1123 23:03:21.550155 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.550466 kubelet[3321]: W1123 23:03:21.550184 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.550466 kubelet[3321]: E1123 23:03:21.550216 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.550668 kubelet[3321]: E1123 23:03:21.550624 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.550668 kubelet[3321]: W1123 23:03:21.550658 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.550798 kubelet[3321]: E1123 23:03:21.550686 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.556184 kubelet[3321]: E1123 23:03:21.555993 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.556184 kubelet[3321]: W1123 23:03:21.556042 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.556184 kubelet[3321]: E1123 23:03:21.556075 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.557095 kubelet[3321]: I1123 23:03:21.556470 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eb8960c6-f005-4ea0-b8f6-6850fa0745aa-kubelet-dir\") pod \"csi-node-driver-d6qd7\" (UID: \"eb8960c6-f005-4ea0-b8f6-6850fa0745aa\") " pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:21.558746 kubelet[3321]: E1123 23:03:21.558638 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.558746 kubelet[3321]: W1123 23:03:21.558684 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.558746 kubelet[3321]: E1123 23:03:21.558719 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.559009 kubelet[3321]: I1123 23:03:21.558763 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eb8960c6-f005-4ea0-b8f6-6850fa0745aa-registration-dir\") pod \"csi-node-driver-d6qd7\" (UID: \"eb8960c6-f005-4ea0-b8f6-6850fa0745aa\") " pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:21.562397 kubelet[3321]: E1123 23:03:21.562212 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.562397 kubelet[3321]: W1123 23:03:21.562262 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.562397 kubelet[3321]: E1123 23:03:21.562299 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.562397 kubelet[3321]: I1123 23:03:21.562345 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eb8960c6-f005-4ea0-b8f6-6850fa0745aa-socket-dir\") pod \"csi-node-driver-d6qd7\" (UID: \"eb8960c6-f005-4ea0-b8f6-6850fa0745aa\") " pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:21.565610 kubelet[3321]: E1123 23:03:21.565510 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.565610 kubelet[3321]: W1123 23:03:21.565562 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.565610 kubelet[3321]: E1123 23:03:21.565600 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.565610 kubelet[3321]: I1123 23:03:21.565654 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eb8960c6-f005-4ea0-b8f6-6850fa0745aa-varrun\") pod \"csi-node-driver-d6qd7\" (UID: \"eb8960c6-f005-4ea0-b8f6-6850fa0745aa\") " pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:21.568163 kubelet[3321]: E1123 23:03:21.568071 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.568924 kubelet[3321]: W1123 23:03:21.568833 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.568924 kubelet[3321]: E1123 23:03:21.568902 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.569101 kubelet[3321]: I1123 23:03:21.568951 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7bd\" (UniqueName: \"kubernetes.io/projected/eb8960c6-f005-4ea0-b8f6-6850fa0745aa-kube-api-access-tf7bd\") pod \"csi-node-driver-d6qd7\" (UID: \"eb8960c6-f005-4ea0-b8f6-6850fa0745aa\") " pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:21.570834 kubelet[3321]: E1123 23:03:21.570231 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.570834 kubelet[3321]: W1123 23:03:21.570278 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.570834 kubelet[3321]: E1123 23:03:21.570350 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.571157 kubelet[3321]: E1123 23:03:21.570925 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.571157 kubelet[3321]: W1123 23:03:21.570951 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.571157 kubelet[3321]: E1123 23:03:21.571031 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.572316 kubelet[3321]: E1123 23:03:21.571921 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.572316 kubelet[3321]: W1123 23:03:21.571950 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.573049 kubelet[3321]: E1123 23:03:21.572975 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.573971 kubelet[3321]: E1123 23:03:21.573924 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.573971 kubelet[3321]: W1123 23:03:21.573965 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.575773 kubelet[3321]: E1123 23:03:21.575606 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.577562 kubelet[3321]: E1123 23:03:21.577414 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.577562 kubelet[3321]: W1123 23:03:21.577453 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.577562 kubelet[3321]: E1123 23:03:21.577523 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.578366 kubelet[3321]: E1123 23:03:21.578315 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.578366 kubelet[3321]: W1123 23:03:21.578356 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.578366 kubelet[3321]: E1123 23:03:21.578404 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.579340 kubelet[3321]: E1123 23:03:21.579288 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.579340 kubelet[3321]: W1123 23:03:21.579328 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.579340 kubelet[3321]: E1123 23:03:21.579377 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.582252 kubelet[3321]: E1123 23:03:21.582107 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.582252 kubelet[3321]: W1123 23:03:21.582187 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.582665 kubelet[3321]: E1123 23:03:21.582535 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.585778 kubelet[3321]: E1123 23:03:21.585719 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.585778 kubelet[3321]: W1123 23:03:21.585765 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.585960 kubelet[3321]: E1123 23:03:21.585799 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.586756 kubelet[3321]: E1123 23:03:21.586705 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.586756 kubelet[3321]: W1123 23:03:21.586744 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.587169 kubelet[3321]: E1123 23:03:21.586778 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.587991 kubelet[3321]: E1123 23:03:21.587616 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.587991 kubelet[3321]: W1123 23:03:21.587656 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.587991 kubelet[3321]: E1123 23:03:21.587689 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.624711 containerd[2006]: time="2025-11-23T23:03:21.622769799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8759,Uid:ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:21.670918 kubelet[3321]: E1123 23:03:21.670573 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.670918 kubelet[3321]: W1123 23:03:21.670645 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.670918 kubelet[3321]: E1123 23:03:21.670683 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.675456 kubelet[3321]: E1123 23:03:21.675382 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.675456 kubelet[3321]: W1123 23:03:21.675443 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.675834 kubelet[3321]: E1123 23:03:21.675656 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.676708 kubelet[3321]: E1123 23:03:21.676644 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.676708 kubelet[3321]: W1123 23:03:21.676689 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.676708 kubelet[3321]: E1123 23:03:21.676760 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.677249 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.680805 kubelet[3321]: W1123 23:03:21.677275 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.677346 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.677677 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.680805 kubelet[3321]: W1123 23:03:21.677699 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.677770 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.678109 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.680805 kubelet[3321]: W1123 23:03:21.678194 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.678239 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.680805 kubelet[3321]: E1123 23:03:21.679257 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.681404 kubelet[3321]: W1123 23:03:21.679286 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.681404 kubelet[3321]: E1123 23:03:21.679333 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.683561 kubelet[3321]: E1123 23:03:21.683269 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.683561 kubelet[3321]: W1123 23:03:21.683309 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.683561 kubelet[3321]: E1123 23:03:21.683374 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.684785 kubelet[3321]: E1123 23:03:21.683931 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.684785 kubelet[3321]: W1123 23:03:21.683960 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.684785 kubelet[3321]: E1123 23:03:21.684026 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.687947 kubelet[3321]: E1123 23:03:21.686095 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.687947 kubelet[3321]: W1123 23:03:21.687418 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.687947 kubelet[3321]: E1123 23:03:21.687509 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.690364 kubelet[3321]: E1123 23:03:21.689338 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.690830 kubelet[3321]: W1123 23:03:21.690555 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.690830 kubelet[3321]: E1123 23:03:21.690666 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.695165 kubelet[3321]: E1123 23:03:21.693282 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.695165 kubelet[3321]: W1123 23:03:21.693327 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.695165 kubelet[3321]: E1123 23:03:21.693405 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.695165 kubelet[3321]: E1123 23:03:21.694241 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.695165 kubelet[3321]: W1123 23:03:21.694271 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.695165 kubelet[3321]: E1123 23:03:21.694439 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.696422 kubelet[3321]: E1123 23:03:21.696380 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.697551 kubelet[3321]: W1123 23:03:21.697237 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.697551 kubelet[3321]: E1123 23:03:21.697381 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.699299 kubelet[3321]: E1123 23:03:21.699030 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.699299 kubelet[3321]: W1123 23:03:21.699066 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.699508 kubelet[3321]: E1123 23:03:21.699226 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.699963 kubelet[3321]: E1123 23:03:21.699883 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.701356 kubelet[3321]: W1123 23:03:21.700176 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.701356 kubelet[3321]: E1123 23:03:21.700998 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.702004 kubelet[3321]: E1123 23:03:21.701699 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.702004 kubelet[3321]: W1123 23:03:21.701733 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.702004 kubelet[3321]: E1123 23:03:21.701806 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.702888 kubelet[3321]: E1123 23:03:21.702852 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.703108 kubelet[3321]: W1123 23:03:21.703073 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.703552 kubelet[3321]: E1123 23:03:21.703324 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.704336 kubelet[3321]: E1123 23:03:21.704295 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.704566 kubelet[3321]: W1123 23:03:21.704530 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.707459 kubelet[3321]: E1123 23:03:21.707235 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.707459 kubelet[3321]: W1123 23:03:21.707276 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.708110 kubelet[3321]: E1123 23:03:21.707979 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.708502 kubelet[3321]: W1123 23:03:21.708287 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.710101 kubelet[3321]: E1123 23:03:21.709270 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.710101 kubelet[3321]: W1123 23:03:21.709305 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.710101 kubelet[3321]: E1123 23:03:21.709340 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.710397 kubelet[3321]: E1123 23:03:21.709851 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.712256 kubelet[3321]: E1123 23:03:21.710277 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.712256 kubelet[3321]: E1123 23:03:21.710297 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.712740 kubelet[3321]: E1123 23:03:21.712704 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.714903 kubelet[3321]: W1123 23:03:21.714852 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.716674 kubelet[3321]: E1123 23:03:21.715090 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.719156 kubelet[3321]: E1123 23:03:21.718611 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.719156 kubelet[3321]: W1123 23:03:21.718653 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.719156 kubelet[3321]: E1123 23:03:21.718706 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.720169 containerd[2006]: time="2025-11-23T23:03:21.719609775Z" level=info msg="connecting to shim b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb" address="unix:///run/containerd/s/bf7ee186f38370ce6e67579d9ab3e91cf53516c7c94c3998669e20d92ed7c4c8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:21.720994 kubelet[3321]: E1123 23:03:21.720644 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.720994 kubelet[3321]: W1123 23:03:21.720873 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.720994 kubelet[3321]: E1123 23:03:21.720914 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:21.745836 kubelet[3321]: E1123 23:03:21.745789 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:21.746100 kubelet[3321]: W1123 23:03:21.745998 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:21.746100 kubelet[3321]: E1123 23:03:21.746045 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:21.775893 containerd[2006]: time="2025-11-23T23:03:21.775811572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54b57b884f-ztr9k,Uid:d73b9c7f-3035-4339-b56b-824d110040f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c00daeb88b9b2ab7705c97d28cde6e8cc5536ae827f408299b76efcc642f8f77\"" Nov 23 23:03:21.781880 containerd[2006]: time="2025-11-23T23:03:21.780181480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:03:21.818593 systemd[1]: Started cri-containerd-b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb.scope - libcontainer container b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb. Nov 23 23:03:21.897011 containerd[2006]: time="2025-11-23T23:03:21.896919784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t8759,Uid:ee3aa845-cb25-4fe0-bbd8-8e549ea9bb47,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\"" Nov 23 23:03:23.013902 kubelet[3321]: E1123 23:03:23.012172 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:23.061889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120181833.mount: Deactivated successfully. Nov 23 23:03:24.251361 containerd[2006]: time="2025-11-23T23:03:24.251269948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:24.254215 containerd[2006]: time="2025-11-23T23:03:24.254140204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 23:03:24.258185 containerd[2006]: time="2025-11-23T23:03:24.257145268Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:24.263147 containerd[2006]: time="2025-11-23T23:03:24.261991048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:24.263621 containerd[2006]: time="2025-11-23T23:03:24.263562496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.481204396s" Nov 23 23:03:24.263828 containerd[2006]: time="2025-11-23T23:03:24.263790232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 23:03:24.268170 containerd[2006]: time="2025-11-23T23:03:24.266932372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 23:03:24.301828 containerd[2006]: time="2025-11-23T23:03:24.301759216Z" level=info msg="CreateContainer within sandbox 
\"c00daeb88b9b2ab7705c97d28cde6e8cc5536ae827f408299b76efcc642f8f77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 23:03:24.328204 containerd[2006]: time="2025-11-23T23:03:24.325367812Z" level=info msg="Container f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:03:24.348934 containerd[2006]: time="2025-11-23T23:03:24.348856972Z" level=info msg="CreateContainer within sandbox \"c00daeb88b9b2ab7705c97d28cde6e8cc5536ae827f408299b76efcc642f8f77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321\"" Nov 23 23:03:24.350630 containerd[2006]: time="2025-11-23T23:03:24.350555572Z" level=info msg="StartContainer for \"f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321\"" Nov 23 23:03:24.353295 containerd[2006]: time="2025-11-23T23:03:24.353220688Z" level=info msg="connecting to shim f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321" address="unix:///run/containerd/s/3a6f0dabfda11c3cc0adf9687c104b7b9958645f426e97ee9758ed86a3e36065" protocol=ttrpc version=3 Nov 23 23:03:24.397480 systemd[1]: Started cri-containerd-f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321.scope - libcontainer container f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321. Nov 23 23:03:24.515761 containerd[2006]: time="2025-11-23T23:03:24.515595461Z" level=info msg="StartContainer for \"f8bf01210707d3e76d6abac82f3fa0df2ac6d331e89cc658f0926ca9bb38c321\" returns successfully" Nov 23 23:03:25.012505 kubelet[3321]: E1123 23:03:25.011970 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:25.376523 kubelet[3321]: E1123 23:03:25.376304 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.376523 kubelet[3321]: W1123 23:03:25.376513 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.377409 kubelet[3321]: E1123 23:03:25.376547 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.379485 kubelet[3321]: E1123 23:03:25.379427 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.379660 kubelet[3321]: W1123 23:03:25.379492 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.380232 kubelet[3321]: E1123 23:03:25.379792 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.381222 kubelet[3321]: E1123 23:03:25.381165 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.382327 kubelet[3321]: W1123 23:03:25.381201 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.382327 kubelet[3321]: E1123 23:03:25.381545 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.382569 kubelet[3321]: E1123 23:03:25.382491 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.382569 kubelet[3321]: W1123 23:03:25.382515 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.383854 kubelet[3321]: E1123 23:03:25.382546 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.384183 kubelet[3321]: E1123 23:03:25.383987 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.384309 kubelet[3321]: W1123 23:03:25.384233 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.384309 kubelet[3321]: E1123 23:03:25.384273 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.385502 kubelet[3321]: E1123 23:03:25.385453 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.385680 kubelet[3321]: W1123 23:03:25.385623 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.385680 kubelet[3321]: E1123 23:03:25.385662 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.387254 kubelet[3321]: E1123 23:03:25.387106 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.387254 kubelet[3321]: W1123 23:03:25.387238 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.387481 kubelet[3321]: E1123 23:03:25.387271 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.389182 kubelet[3321]: E1123 23:03:25.388384 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.389182 kubelet[3321]: W1123 23:03:25.388423 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.389182 kubelet[3321]: E1123 23:03:25.388864 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.392876 kubelet[3321]: E1123 23:03:25.392737 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.393509 kubelet[3321]: W1123 23:03:25.392777 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.393509 kubelet[3321]: E1123 23:03:25.392942 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.395447 kubelet[3321]: E1123 23:03:25.395386 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.395447 kubelet[3321]: W1123 23:03:25.395429 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.395840 kubelet[3321]: E1123 23:03:25.395462 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.398219 kubelet[3321]: E1123 23:03:25.398141 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.398219 kubelet[3321]: W1123 23:03:25.398181 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.398219 kubelet[3321]: E1123 23:03:25.398213 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.400187 kubelet[3321]: E1123 23:03:25.400047 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.401199 kubelet[3321]: W1123 23:03:25.400195 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.401199 kubelet[3321]: E1123 23:03:25.400233 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.402023 kubelet[3321]: E1123 23:03:25.401943 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.402023 kubelet[3321]: W1123 23:03:25.401988 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.402023 kubelet[3321]: E1123 23:03:25.402031 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.402923 kubelet[3321]: E1123 23:03:25.402850 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.402923 kubelet[3321]: W1123 23:03:25.402887 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.402923 kubelet[3321]: E1123 23:03:25.402917 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.405270 kubelet[3321]: E1123 23:03:25.405200 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.405270 kubelet[3321]: W1123 23:03:25.405238 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.405270 kubelet[3321]: E1123 23:03:25.405270 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.421338 kubelet[3321]: E1123 23:03:25.421287 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.421982 kubelet[3321]: W1123 23:03:25.421782 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.421982 kubelet[3321]: E1123 23:03:25.421869 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.423415 kubelet[3321]: E1123 23:03:25.423365 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.423571 kubelet[3321]: W1123 23:03:25.423430 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.423633 kubelet[3321]: E1123 23:03:25.423593 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.424047 kubelet[3321]: E1123 23:03:25.424015 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.424228 kubelet[3321]: W1123 23:03:25.424045 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.424228 kubelet[3321]: E1123 23:03:25.424110 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.424970 kubelet[3321]: E1123 23:03:25.424931 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.424970 kubelet[3321]: W1123 23:03:25.424966 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.425279 kubelet[3321]: E1123 23:03:25.425006 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.425629 kubelet[3321]: E1123 23:03:25.425591 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.425794 kubelet[3321]: W1123 23:03:25.425626 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.425794 kubelet[3321]: E1123 23:03:25.425699 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.426195 kubelet[3321]: E1123 23:03:25.426160 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.427717 kubelet[3321]: W1123 23:03:25.427361 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.427717 kubelet[3321]: E1123 23:03:25.427428 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.430008 containerd[2006]: time="2025-11-23T23:03:25.429680034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:25.430579 kubelet[3321]: E1123 23:03:25.430083 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.430579 kubelet[3321]: W1123 23:03:25.430109 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.433045 kubelet[3321]: E1123 23:03:25.431461 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.433045 kubelet[3321]: W1123 23:03:25.431634 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.433045 kubelet[3321]: E1123 23:03:25.432346 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.434581 kubelet[3321]: I1123 23:03:25.433003 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54b57b884f-ztr9k" podStartSLOduration=2.94690329 podStartE2EDuration="5.432974874s" podCreationTimestamp="2025-11-23 23:03:20 +0000 UTC" firstStartedPulling="2025-11-23 23:03:21.779290504 +0000 UTC m=+38.074034446" lastFinishedPulling="2025-11-23 23:03:24.265362088 +0000 UTC m=+40.560106030" observedRunningTime="2025-11-23 23:03:25.398568618 +0000 UTC m=+41.693312644" watchObservedRunningTime="2025-11-23 23:03:25.432974874 +0000 UTC m=+41.727718888" Nov 23 23:03:25.434936 kubelet[3321]: E1123 23:03:25.434684 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.435551 kubelet[3321]: W1123 23:03:25.434930 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.435551 kubelet[3321]: E1123 23:03:25.435535 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.438174 containerd[2006]: time="2025-11-23T23:03:25.436989162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:03:25.439164 kubelet[3321]: E1123 23:03:25.439086 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.440005 kubelet[3321]: E1123 23:03:25.439414 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.440005 kubelet[3321]: W1123 23:03:25.439499 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.440005 kubelet[3321]: E1123 23:03:25.439578 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.440508 containerd[2006]: time="2025-11-23T23:03:25.440453466Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:25.443169 kubelet[3321]: E1123 23:03:25.442219 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.443169 kubelet[3321]: W1123 23:03:25.442296 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.443169 kubelet[3321]: E1123 23:03:25.442383 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.444275 kubelet[3321]: E1123 23:03:25.443804 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.444275 kubelet[3321]: W1123 23:03:25.443846 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.444275 kubelet[3321]: E1123 23:03:25.443916 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.444546 kubelet[3321]: E1123 23:03:25.444350 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.444546 kubelet[3321]: W1123 23:03:25.444377 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.444546 kubelet[3321]: E1123 23:03:25.444448 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.447251 kubelet[3321]: E1123 23:03:25.446432 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.447251 kubelet[3321]: W1123 23:03:25.446499 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.448107 kubelet[3321]: E1123 23:03:25.447901 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.450345 kubelet[3321]: E1123 23:03:25.450261 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.450345 kubelet[3321]: W1123 23:03:25.450334 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.450530 kubelet[3321]: E1123 23:03:25.450466 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.452417 kubelet[3321]: E1123 23:03:25.452362 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.452417 kubelet[3321]: W1123 23:03:25.452404 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.456083 kubelet[3321]: E1123 23:03:25.455216 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.458025 kubelet[3321]: E1123 23:03:25.457505 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.458025 kubelet[3321]: W1123 23:03:25.457549 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.458025 kubelet[3321]: E1123 23:03:25.457609 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:03:25.459478 kubelet[3321]: E1123 23:03:25.458583 3321 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:03:25.459478 kubelet[3321]: W1123 23:03:25.458616 3321 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:03:25.459478 kubelet[3321]: E1123 23:03:25.458647 3321 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:03:25.463648 containerd[2006]: time="2025-11-23T23:03:25.463571586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:25.469348 containerd[2006]: time="2025-11-23T23:03:25.469262478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.20221517s" Nov 23 23:03:25.469698 containerd[2006]: time="2025-11-23T23:03:25.469535094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:03:25.480324 containerd[2006]: time="2025-11-23T23:03:25.480026214Z" level=info msg="CreateContainer within sandbox \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:03:25.516945 containerd[2006]: time="2025-11-23T23:03:25.516805530Z" level=info msg="Container 56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:03:25.530718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4263814396.mount: Deactivated successfully. Nov 23 23:03:25.544958 containerd[2006]: time="2025-11-23T23:03:25.544885086Z" level=info msg="CreateContainer within sandbox \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07\"" Nov 23 23:03:25.547447 containerd[2006]: time="2025-11-23T23:03:25.547345890Z" level=info msg="StartContainer for \"56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07\"" Nov 23 23:03:25.551947 containerd[2006]: time="2025-11-23T23:03:25.551859834Z" level=info msg="connecting to shim 56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07" address="unix:///run/containerd/s/bf7ee186f38370ce6e67579d9ab3e91cf53516c7c94c3998669e20d92ed7c4c8" protocol=ttrpc version=3 Nov 23 23:03:25.596834 systemd[1]: Started cri-containerd-56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07.scope - libcontainer container 56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07. Nov 23 23:03:25.714317 containerd[2006]: time="2025-11-23T23:03:25.713106979Z" level=info msg="StartContainer for \"56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07\" returns successfully" Nov 23 23:03:25.754644 systemd[1]: cri-containerd-56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07.scope: Deactivated successfully. 
Nov 23 23:03:25.768834 containerd[2006]: time="2025-11-23T23:03:25.768583927Z" level=info msg="received container exit event container_id:\"56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07\" id:\"56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07\" pid:4283 exited_at:{seconds:1763939005 nanos:767654527}" Nov 23 23:03:25.821823 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56821ad40d821e7653d059d6a5644fb8212b211da0a54b60d07823636f1d4a07-rootfs.mount: Deactivated successfully. Nov 23 23:03:26.367446 containerd[2006]: time="2025-11-23T23:03:26.367381554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:03:27.011284 kubelet[3321]: E1123 23:03:27.011220 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:29.013195 kubelet[3321]: E1123 23:03:29.011690 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:29.297259 containerd[2006]: time="2025-11-23T23:03:29.296823465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:29.299450 containerd[2006]: time="2025-11-23T23:03:29.299391357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:03:29.301336 containerd[2006]: time="2025-11-23T23:03:29.301244829Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:29.305899 containerd[2006]: time="2025-11-23T23:03:29.305814405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:29.308284 containerd[2006]: time="2025-11-23T23:03:29.307479105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.940028131s" Nov 23 23:03:29.308284 containerd[2006]: time="2025-11-23T23:03:29.307537293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:03:29.311874 containerd[2006]: time="2025-11-23T23:03:29.311824125Z" level=info msg="CreateContainer within sandbox \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:03:29.325631 containerd[2006]: time="2025-11-23T23:03:29.325579353Z" level=info msg="Container 81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a: CDI devices from CRI Config.CDIDevices: []" Nov 23 
23:03:29.344238 containerd[2006]: time="2025-11-23T23:03:29.344186253Z" level=info msg="CreateContainer within sandbox \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a\"" Nov 23 23:03:29.346700 containerd[2006]: time="2025-11-23T23:03:29.346655889Z" level=info msg="StartContainer for \"81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a\"" Nov 23 23:03:29.351140 containerd[2006]: time="2025-11-23T23:03:29.350933193Z" level=info msg="connecting to shim 81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a" address="unix:///run/containerd/s/bf7ee186f38370ce6e67579d9ab3e91cf53516c7c94c3998669e20d92ed7c4c8" protocol=ttrpc version=3 Nov 23 23:03:29.412428 systemd[1]: Started cri-containerd-81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a.scope - libcontainer container 81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a. Nov 23 23:03:29.522939 containerd[2006]: time="2025-11-23T23:03:29.522857314Z" level=info msg="StartContainer for \"81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a\" returns successfully" Nov 23 23:03:30.513694 containerd[2006]: time="2025-11-23T23:03:30.513614975Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:03:30.519001 systemd[1]: cri-containerd-81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a.scope: Deactivated successfully. Nov 23 23:03:30.520303 systemd[1]: cri-containerd-81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a.scope: Consumed 949ms CPU time, 187.6M memory peak, 165.9M written to disk. Nov 23 23:03:30.526448 containerd[2006]: time="2025-11-23T23:03:30.525797123Z" level=info msg="received container exit event container_id:\"81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a\" id:\"81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a\" pid:4340 exited_at:{seconds:1763939010 nanos:525424475}" Nov 23 23:03:30.565776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81d8378525d7aac92b56c1daf4cd6a2ea7ccd56e198981183f3a49d385afb12a-rootfs.mount: Deactivated successfully. Nov 23 23:03:30.574172 kubelet[3321]: I1123 23:03:30.573598 3321 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:03:30.651023 systemd[1]: Created slice kubepods-burstable-podc89cfe9d_5560_450f_9829_e883ba097ecf.slice - libcontainer container kubepods-burstable-podc89cfe9d_5560_450f_9829_e883ba097ecf.slice. Nov 23 23:03:30.672602 systemd[1]: Created slice kubepods-burstable-pod57db0512_dfe1_4926_96f1_d477506ac2b6.slice - libcontainer container kubepods-burstable-pod57db0512_dfe1_4926_96f1_d477506ac2b6.slice. 
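The install-cni container that just ran writes Calico's CNI plugin binaries and configuration onto the host, and the error above ("failed to reload cni configuration after receiving fs change event ... no network config found in /etc/cni/net.d") is containerd reacting to a file change in that directory before a usable network configuration file is present; the recurring NetworkPluginNotReady / "cni plugin not initialized" messages for csi-node-driver-d6qd7 persist for the same reason. As a rough illustration only (this is not containerd's actual libcni loader), the standard-library sketch below shows the kind of check involved: scan /etc/cni/net.d for .conf/.conflist files and see whether any of them parses as a JSON network configuration, ignoring other files such as calico-kubeconfig.

// cni_config_check.go (illustrative sketch, not containerd's loader)
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		name := e.Name()
		// Only .conf and .conflist files count as network configuration;
		// a write to calico-kubeconfig alone does not make the network ready.
		if !strings.HasSuffix(name, ".conf") && !strings.HasSuffix(name, ".conflist") {
			continue
		}
		raw, err := os.ReadFile(filepath.Join(dir, name))
		if err != nil {
			continue
		}
		var conf struct {
			Name string `json:"name"`
		}
		if json.Unmarshal(raw, &conf) == nil && conf.Name != "" {
			fmt.Printf("usable CNI config: %s (network %q)\n", name, conf.Name)
			found = true
		}
	}
	if !found {
		fmt.Println("no network config found in", dir)
	}
}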
Nov 23 23:03:30.679918 kubelet[3321]: I1123 23:03:30.675001 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnp64\" (UniqueName: \"kubernetes.io/projected/57db0512-dfe1-4926-96f1-d477506ac2b6-kube-api-access-gnp64\") pod \"coredns-668d6bf9bc-ndcrw\" (UID: \"57db0512-dfe1-4926-96f1-d477506ac2b6\") " pod="kube-system/coredns-668d6bf9bc-ndcrw" Nov 23 23:03:30.679918 kubelet[3321]: I1123 23:03:30.675071 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftfg\" (UniqueName: \"kubernetes.io/projected/c89cfe9d-5560-450f-9829-e883ba097ecf-kube-api-access-4ftfg\") pod \"coredns-668d6bf9bc-fhdqb\" (UID: \"c89cfe9d-5560-450f-9829-e883ba097ecf\") " pod="kube-system/coredns-668d6bf9bc-fhdqb" Nov 23 23:03:30.679918 kubelet[3321]: I1123 23:03:30.675144 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c89cfe9d-5560-450f-9829-e883ba097ecf-config-volume\") pod \"coredns-668d6bf9bc-fhdqb\" (UID: \"c89cfe9d-5560-450f-9829-e883ba097ecf\") " pod="kube-system/coredns-668d6bf9bc-fhdqb" Nov 23 23:03:30.679918 kubelet[3321]: I1123 23:03:30.675197 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57db0512-dfe1-4926-96f1-d477506ac2b6-config-volume\") pod \"coredns-668d6bf9bc-ndcrw\" (UID: \"57db0512-dfe1-4926-96f1-d477506ac2b6\") " pod="kube-system/coredns-668d6bf9bc-ndcrw" Nov 23 23:03:30.703510 systemd[1]: Created slice kubepods-besteffort-pod607d6cea_c322_4995_9bb6_13328b249dcf.slice - libcontainer container kubepods-besteffort-pod607d6cea_c322_4995_9bb6_13328b249dcf.slice. Nov 23 23:03:30.734037 systemd[1]: Created slice kubepods-besteffort-pod4be32920_a592_41ee_b676_15a5a370b665.slice - libcontainer container kubepods-besteffort-pod4be32920_a592_41ee_b676_15a5a370b665.slice. Nov 23 23:03:30.757574 kubelet[3321]: W1123 23:03:30.757488 3321 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ip-172-31-29-95" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-95' and this object Nov 23 23:03:30.757806 systemd[1]: Created slice kubepods-besteffort-pod33a858d5_f639_4092_9d21_043beaa938d2.slice - libcontainer container kubepods-besteffort-pod33a858d5_f639_4092_9d21_043beaa938d2.slice. 
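Looking back at the pod_startup_latency_tracker entry for calico-typha-54b57b884f-ztr9k at 23:03:25.43, the two reported figures are internally consistent with the timestamps quoted in that same entry: podStartE2EDuration of 5.432974874s equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration of 2.94690329 equals that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The short program below simply re-derives both numbers from the quoted timestamps; it is a sanity check of this reading of the entry, not a claim about how the kubelet computes the metric internally.

// pod_startup_latency_recalc.go (re-derives the two figures from the
// timestamps quoted in the 23:03:25.43 entry above)
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-23 23:03:20 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-11-23 23:03:21.779290504 +0000 UTC") // firstStartedPulling
	lastPull := parse("2025-11-23 23:03:24.265362088 +0000 UTC")  // lastFinishedPulling
	running := parse("2025-11-23 23:03:25.432974874 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // 5.432974874s, matches podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 2.486071584s spent pulling images
	slo := e2e - pull               // 2.94690329s, matches podStartSLOduration

	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("image pull window:  ", pull)
	fmt.Println("podStartSLOduration:", slo)
}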
Nov 23 23:03:30.758275 kubelet[3321]: E1123 23:03:30.757857 3321 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" logger="UnhandledError" Nov 23 23:03:30.759143 kubelet[3321]: W1123 23:03:30.758719 3321 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-29-95" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-95' and this object Nov 23 23:03:30.759874 kubelet[3321]: E1123 23:03:30.759477 3321 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" logger="UnhandledError" Nov 23 23:03:30.768836 kubelet[3321]: W1123 23:03:30.768701 3321 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ip-172-31-29-95" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-95' and this object Nov 23 23:03:30.769898 kubelet[3321]: E1123 23:03:30.769367 3321 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" logger="UnhandledError" Nov 23 23:03:30.769898 kubelet[3321]: W1123 23:03:30.769394 3321 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ip-172-31-29-95" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-95' and this object Nov 23 23:03:30.771353 kubelet[3321]: E1123 23:03:30.769704 3321 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" logger="UnhandledError" Nov 23 23:03:30.777384 kubelet[3321]: I1123 23:03:30.777303 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d094bbd9-4e37-478d-88c3-aa6e7c244a7b-goldmane-ca-bundle\") pod \"goldmane-666569f655-sjjzv\" (UID: \"d094bbd9-4e37-478d-88c3-aa6e7c244a7b\") " pod="calico-system/goldmane-666569f655-sjjzv" Nov 23 23:03:30.777384 kubelet[3321]: I1123 
23:03:30.777417 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4be32920-a592-41ee-b676-15a5a370b665-calico-apiserver-certs\") pod \"calico-apiserver-596c4fb774-szwps\" (UID: \"4be32920-a592-41ee-b676-15a5a370b665\") " pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" Nov 23 23:03:30.777384 kubelet[3321]: I1123 23:03:30.777457 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxk77\" (UniqueName: \"kubernetes.io/projected/d094bbd9-4e37-478d-88c3-aa6e7c244a7b-kube-api-access-dxk77\") pod \"goldmane-666569f655-sjjzv\" (UID: \"d094bbd9-4e37-478d-88c3-aa6e7c244a7b\") " pod="calico-system/goldmane-666569f655-sjjzv" Nov 23 23:03:30.777384 kubelet[3321]: I1123 23:03:30.777505 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhmg8\" (UniqueName: \"kubernetes.io/projected/4be32920-a592-41ee-b676-15a5a370b665-kube-api-access-zhmg8\") pod \"calico-apiserver-596c4fb774-szwps\" (UID: \"4be32920-a592-41ee-b676-15a5a370b665\") " pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" Nov 23 23:03:30.777384 kubelet[3321]: I1123 23:03:30.777548 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-ca-bundle\") pod \"whisker-54fb88b67b-pzx5c\" (UID: \"57bd3b2d-4319-4516-874e-21e2486ed672\") " pod="calico-system/whisker-54fb88b67b-pzx5c" Nov 23 23:03:30.777967 kubelet[3321]: I1123 23:03:30.777611 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d094bbd9-4e37-478d-88c3-aa6e7c244a7b-goldmane-key-pair\") pod \"goldmane-666569f655-sjjzv\" (UID: \"d094bbd9-4e37-478d-88c3-aa6e7c244a7b\") " pod="calico-system/goldmane-666569f655-sjjzv" Nov 23 23:03:30.777967 kubelet[3321]: I1123 23:03:30.777656 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/607d6cea-c322-4995-9bb6-13328b249dcf-tigera-ca-bundle\") pod \"calico-kube-controllers-68fb77858b-7fnfw\" (UID: \"607d6cea-c322-4995-9bb6-13328b249dcf\") " pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" Nov 23 23:03:30.777967 kubelet[3321]: I1123 23:03:30.777699 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/33a858d5-f639-4092-9d21-043beaa938d2-calico-apiserver-certs\") pod \"calico-apiserver-596c4fb774-qwzhg\" (UID: \"33a858d5-f639-4092-9d21-043beaa938d2\") " pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" Nov 23 23:03:30.777967 kubelet[3321]: I1123 23:03:30.777737 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d094bbd9-4e37-478d-88c3-aa6e7c244a7b-config\") pod \"goldmane-666569f655-sjjzv\" (UID: \"d094bbd9-4e37-478d-88c3-aa6e7c244a7b\") " pod="calico-system/goldmane-666569f655-sjjzv" Nov 23 23:03:30.777967 kubelet[3321]: I1123 23:03:30.777804 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-backend-key-pair\") pod \"whisker-54fb88b67b-pzx5c\" (UID: \"57bd3b2d-4319-4516-874e-21e2486ed672\") " pod="calico-system/whisker-54fb88b67b-pzx5c" Nov 23 23:03:30.779352 kubelet[3321]: I1123 23:03:30.777846 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqlth\" (UniqueName: \"kubernetes.io/projected/33a858d5-f639-4092-9d21-043beaa938d2-kube-api-access-mqlth\") pod \"calico-apiserver-596c4fb774-qwzhg\" (UID: \"33a858d5-f639-4092-9d21-043beaa938d2\") " pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" Nov 23 23:03:30.779352 kubelet[3321]: I1123 23:03:30.777908 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkwx5\" (UniqueName: \"kubernetes.io/projected/607d6cea-c322-4995-9bb6-13328b249dcf-kube-api-access-jkwx5\") pod \"calico-kube-controllers-68fb77858b-7fnfw\" (UID: \"607d6cea-c322-4995-9bb6-13328b249dcf\") " pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" Nov 23 23:03:30.779352 kubelet[3321]: I1123 23:03:30.777946 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4j4d\" (UniqueName: \"kubernetes.io/projected/57bd3b2d-4319-4516-874e-21e2486ed672-kube-api-access-p4j4d\") pod \"whisker-54fb88b67b-pzx5c\" (UID: \"57bd3b2d-4319-4516-874e-21e2486ed672\") " pod="calico-system/whisker-54fb88b67b-pzx5c" Nov 23 23:03:30.787586 systemd[1]: Created slice kubepods-besteffort-pod57bd3b2d_4319_4516_874e_21e2486ed672.slice - libcontainer container kubepods-besteffort-pod57bd3b2d_4319_4516_874e_21e2486ed672.slice. Nov 23 23:03:30.805612 systemd[1]: Created slice kubepods-besteffort-podd094bbd9_4e37_478d_88c3_aa6e7c244a7b.slice - libcontainer container kubepods-besteffort-podd094bbd9_4e37_478d_88c3_aa6e7c244a7b.slice. Nov 23 23:03:30.968348 containerd[2006]: time="2025-11-23T23:03:30.967537033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhdqb,Uid:c89cfe9d-5560-450f-9829-e883ba097ecf,Namespace:kube-system,Attempt:0,}" Nov 23 23:03:30.988441 containerd[2006]: time="2025-11-23T23:03:30.988339849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ndcrw,Uid:57db0512-dfe1-4926-96f1-d477506ac2b6,Namespace:kube-system,Attempt:0,}" Nov 23 23:03:31.016849 containerd[2006]: time="2025-11-23T23:03:31.016786006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fb77858b-7fnfw,Uid:607d6cea-c322-4995-9bb6-13328b249dcf,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:31.030204 systemd[1]: Created slice kubepods-besteffort-podeb8960c6_f005_4ea0_b8f6_6850fa0745aa.slice - libcontainer container kubepods-besteffort-podeb8960c6_f005_4ea0_b8f6_6850fa0745aa.slice. 
Nov 23 23:03:31.039920 containerd[2006]: time="2025-11-23T23:03:31.039859870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6qd7,Uid:eb8960c6-f005-4ea0-b8f6-6850fa0745aa,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:31.048005 containerd[2006]: time="2025-11-23T23:03:31.047904910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-szwps,Uid:4be32920-a592-41ee-b676-15a5a370b665,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:03:31.070469 containerd[2006]: time="2025-11-23T23:03:31.070339006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-qwzhg,Uid:33a858d5-f639-4092-9d21-043beaa938d2,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:03:31.450459 containerd[2006]: time="2025-11-23T23:03:31.449105892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:03:31.579053 containerd[2006]: time="2025-11-23T23:03:31.578844732Z" level=error msg="Failed to destroy network for sandbox \"7344562d095910798d13a8b389808b44bbeac27bf832a52483578a6a1f51c20c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.591908 containerd[2006]: time="2025-11-23T23:03:31.591712668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-qwzhg,Uid:33a858d5-f639-4092-9d21-043beaa938d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7344562d095910798d13a8b389808b44bbeac27bf832a52483578a6a1f51c20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.595151 kubelet[3321]: E1123 23:03:31.593181 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7344562d095910798d13a8b389808b44bbeac27bf832a52483578a6a1f51c20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.597877 kubelet[3321]: E1123 23:03:31.597319 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7344562d095910798d13a8b389808b44bbeac27bf832a52483578a6a1f51c20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" Nov 23 23:03:31.597877 kubelet[3321]: E1123 23:03:31.597375 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7344562d095910798d13a8b389808b44bbeac27bf832a52483578a6a1f51c20c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" Nov 23 23:03:31.597877 kubelet[3321]: E1123 23:03:31.597450 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-596c4fb774-qwzhg_calico-apiserver(33a858d5-f639-4092-9d21-043beaa938d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-596c4fb774-qwzhg_calico-apiserver(33a858d5-f639-4092-9d21-043beaa938d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7344562d095910798d13a8b389808b44bbeac27bf832a52483578a6a1f51c20c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:03:31.602389 containerd[2006]: time="2025-11-23T23:03:31.602306412Z" level=error msg="Failed to destroy network for sandbox \"1fc0a105f7a32b841c67bfda4db66163b6640dac08d0c2515280237d63a23d7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.604835 systemd[1]: run-netns-cni\x2daa986e04\x2dbdad\x2d3a00\x2dd62a\x2dbb1fe124a0a8.mount: Deactivated successfully. Nov 23 23:03:31.611271 containerd[2006]: time="2025-11-23T23:03:31.609453372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhdqb,Uid:c89cfe9d-5560-450f-9829-e883ba097ecf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc0a105f7a32b841c67bfda4db66163b6640dac08d0c2515280237d63a23d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.611448 kubelet[3321]: E1123 23:03:31.609755 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc0a105f7a32b841c67bfda4db66163b6640dac08d0c2515280237d63a23d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.611448 kubelet[3321]: E1123 23:03:31.609824 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc0a105f7a32b841c67bfda4db66163b6640dac08d0c2515280237d63a23d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhdqb" Nov 23 23:03:31.611448 kubelet[3321]: E1123 23:03:31.609857 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fc0a105f7a32b841c67bfda4db66163b6640dac08d0c2515280237d63a23d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fhdqb" Nov 23 23:03:31.611673 kubelet[3321]: E1123 23:03:31.609915 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fhdqb_kube-system(c89cfe9d-5560-450f-9829-e883ba097ecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-fhdqb_kube-system(c89cfe9d-5560-450f-9829-e883ba097ecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fc0a105f7a32b841c67bfda4db66163b6640dac08d0c2515280237d63a23d7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fhdqb" podUID="c89cfe9d-5560-450f-9829-e883ba097ecf" Nov 23 23:03:31.614689 systemd[1]: run-netns-cni\x2da8e613ae\x2d329d\x2d5aad\x2d1c27\x2d029d0a409a6d.mount: Deactivated successfully. Nov 23 23:03:31.636207 containerd[2006]: time="2025-11-23T23:03:31.635379313Z" level=error msg="Failed to destroy network for sandbox \"f805fcf0f21bb5c5908bfa99e20188f2143b8a9ff9531beabb960954c7e2f081\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.643415 systemd[1]: run-netns-cni\x2d0fb0a743\x2db8dd\x2dfa93\x2d0f6d\x2d74f1ca76b338.mount: Deactivated successfully. Nov 23 23:03:31.645493 containerd[2006]: time="2025-11-23T23:03:31.645342397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fb77858b-7fnfw,Uid:607d6cea-c322-4995-9bb6-13328b249dcf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f805fcf0f21bb5c5908bfa99e20188f2143b8a9ff9531beabb960954c7e2f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.647805 kubelet[3321]: E1123 23:03:31.647208 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f805fcf0f21bb5c5908bfa99e20188f2143b8a9ff9531beabb960954c7e2f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.647805 kubelet[3321]: E1123 23:03:31.647303 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f805fcf0f21bb5c5908bfa99e20188f2143b8a9ff9531beabb960954c7e2f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" Nov 23 23:03:31.647805 kubelet[3321]: E1123 23:03:31.647339 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f805fcf0f21bb5c5908bfa99e20188f2143b8a9ff9531beabb960954c7e2f081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" Nov 23 23:03:31.648096 kubelet[3321]: E1123 23:03:31.647406 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68fb77858b-7fnfw_calico-system(607d6cea-c322-4995-9bb6-13328b249dcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-68fb77858b-7fnfw_calico-system(607d6cea-c322-4995-9bb6-13328b249dcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f805fcf0f21bb5c5908bfa99e20188f2143b8a9ff9531beabb960954c7e2f081\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:03:31.701542 containerd[2006]: time="2025-11-23T23:03:31.701311189Z" level=error msg="Failed to destroy network for sandbox \"3e69502c8fc046b482a120967901a7e4117011934bb3040a01ec2ad4e33634cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.712293 containerd[2006]: time="2025-11-23T23:03:31.708957049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6qd7,Uid:eb8960c6-f005-4ea0-b8f6-6850fa0745aa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e69502c8fc046b482a120967901a7e4117011934bb3040a01ec2ad4e33634cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.712948 kubelet[3321]: E1123 23:03:31.712867 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e69502c8fc046b482a120967901a7e4117011934bb3040a01ec2ad4e33634cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.715333 kubelet[3321]: E1123 23:03:31.713589 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e69502c8fc046b482a120967901a7e4117011934bb3040a01ec2ad4e33634cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:31.715333 kubelet[3321]: E1123 23:03:31.713637 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e69502c8fc046b482a120967901a7e4117011934bb3040a01ec2ad4e33634cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d6qd7" Nov 23 23:03:31.715333 kubelet[3321]: E1123 23:03:31.713715 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e69502c8fc046b482a120967901a7e4117011934bb3040a01ec2ad4e33634cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:31.755713 containerd[2006]: time="2025-11-23T23:03:31.755631913Z" level=error msg="Failed to destroy network for sandbox \"5c41155ae4faa7de7ab5af44416d09fe131dc683b8fc0c900290b7be420a93d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.757489 containerd[2006]: time="2025-11-23T23:03:31.757415125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ndcrw,Uid:57db0512-dfe1-4926-96f1-d477506ac2b6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c41155ae4faa7de7ab5af44416d09fe131dc683b8fc0c900290b7be420a93d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.758423 kubelet[3321]: E1123 23:03:31.758354 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c41155ae4faa7de7ab5af44416d09fe131dc683b8fc0c900290b7be420a93d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.760164 kubelet[3321]: E1123 23:03:31.758603 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c41155ae4faa7de7ab5af44416d09fe131dc683b8fc0c900290b7be420a93d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ndcrw" Nov 23 23:03:31.760164 kubelet[3321]: E1123 23:03:31.758644 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c41155ae4faa7de7ab5af44416d09fe131dc683b8fc0c900290b7be420a93d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ndcrw" Nov 23 23:03:31.760506 kubelet[3321]: E1123 23:03:31.760452 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ndcrw_kube-system(57db0512-dfe1-4926-96f1-d477506ac2b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ndcrw_kube-system(57db0512-dfe1-4926-96f1-d477506ac2b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c41155ae4faa7de7ab5af44416d09fe131dc683b8fc0c900290b7be420a93d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ndcrw" podUID="57db0512-dfe1-4926-96f1-d477506ac2b6" Nov 23 23:03:31.766137 containerd[2006]: time="2025-11-23T23:03:31.766025533Z" level=error msg="Failed to destroy network for sandbox 
\"d9f4875d6414624ad75d6ebb3643e44e11e416e4eccf1d45b5eae8aeaf337b56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.774827 containerd[2006]: time="2025-11-23T23:03:31.774436177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-szwps,Uid:4be32920-a592-41ee-b676-15a5a370b665,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f4875d6414624ad75d6ebb3643e44e11e416e4eccf1d45b5eae8aeaf337b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.775367 kubelet[3321]: E1123 23:03:31.775275 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f4875d6414624ad75d6ebb3643e44e11e416e4eccf1d45b5eae8aeaf337b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:31.776485 kubelet[3321]: E1123 23:03:31.776252 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f4875d6414624ad75d6ebb3643e44e11e416e4eccf1d45b5eae8aeaf337b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" Nov 23 23:03:31.777256 kubelet[3321]: E1123 23:03:31.776891 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9f4875d6414624ad75d6ebb3643e44e11e416e4eccf1d45b5eae8aeaf337b56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" Nov 23 23:03:31.779061 kubelet[3321]: E1123 23:03:31.778950 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-596c4fb774-szwps_calico-apiserver(4be32920-a592-41ee-b676-15a5a370b665)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-596c4fb774-szwps_calico-apiserver(4be32920-a592-41ee-b676-15a5a370b665)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9f4875d6414624ad75d6ebb3643e44e11e416e4eccf1d45b5eae8aeaf337b56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:03:31.887008 kubelet[3321]: E1123 23:03:31.886921 3321 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:03:31.888273 kubelet[3321]: E1123 23:03:31.888217 3321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d094bbd9-4e37-478d-88c3-aa6e7c244a7b-goldmane-ca-bundle 
podName:d094bbd9-4e37-478d-88c3-aa6e7c244a7b nodeName:}" failed. No retries permitted until 2025-11-23 23:03:32.387229302 +0000 UTC m=+48.681973244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/d094bbd9-4e37-478d-88c3-aa6e7c244a7b-goldmane-ca-bundle") pod "goldmane-666569f655-sjjzv" (UID: "d094bbd9-4e37-478d-88c3-aa6e7c244a7b") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:03:31.888451 kubelet[3321]: E1123 23:03:31.888330 3321 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:03:31.888538 kubelet[3321]: E1123 23:03:31.888509 3321 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-ca-bundle podName:57bd3b2d-4319-4516-874e-21e2486ed672 nodeName:}" failed. No retries permitted until 2025-11-23 23:03:32.38843553 +0000 UTC m=+48.683179484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-ca-bundle") pod "whisker-54fb88b67b-pzx5c" (UID: "57bd3b2d-4319-4516-874e-21e2486ed672") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:03:32.566951 systemd[1]: run-netns-cni\x2de1f21c9f\x2d45b5\x2de035\x2d70c8\x2d26b59116583b.mount: Deactivated successfully. Nov 23 23:03:32.567235 systemd[1]: run-netns-cni\x2d1024e656\x2d0c86\x2df887\x2d2848\x2de73deff2e841.mount: Deactivated successfully. Nov 23 23:03:32.567377 systemd[1]: run-netns-cni\x2dabcab554\x2d9e45\x2d190e\x2d2f7b\x2d41cba36ed4a1.mount: Deactivated successfully. Nov 23 23:03:32.597154 containerd[2006]: time="2025-11-23T23:03:32.597033253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54fb88b67b-pzx5c,Uid:57bd3b2d-4319-4516-874e-21e2486ed672,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:32.615341 containerd[2006]: time="2025-11-23T23:03:32.615058837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjjzv,Uid:d094bbd9-4e37-478d-88c3-aa6e7c244a7b,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:32.781604 containerd[2006]: time="2025-11-23T23:03:32.781477490Z" level=error msg="Failed to destroy network for sandbox \"09747a691df9e5ff6ab82c32ae2081ed64a7ea9e47586b4bf3c1be0e3b047c44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:32.788253 containerd[2006]: time="2025-11-23T23:03:32.787867202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54fb88b67b-pzx5c,Uid:57bd3b2d-4319-4516-874e-21e2486ed672,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09747a691df9e5ff6ab82c32ae2081ed64a7ea9e47586b4bf3c1be0e3b047c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:32.789776 kubelet[3321]: E1123 23:03:32.788878 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09747a691df9e5ff6ab82c32ae2081ed64a7ea9e47586b4bf3c1be0e3b047c44\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:32.789776 kubelet[3321]: E1123 23:03:32.789283 3321 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09747a691df9e5ff6ab82c32ae2081ed64a7ea9e47586b4bf3c1be0e3b047c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54fb88b67b-pzx5c" Nov 23 23:03:32.789776 kubelet[3321]: E1123 23:03:32.789324 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09747a691df9e5ff6ab82c32ae2081ed64a7ea9e47586b4bf3c1be0e3b047c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54fb88b67b-pzx5c" Nov 23 23:03:32.789168 systemd[1]: run-netns-cni\x2d1b456855\x2d2b95\x2dce50\x2d0f47\x2dfc066f8b4f64.mount: Deactivated successfully. Nov 23 23:03:32.793449 kubelet[3321]: E1123 23:03:32.789402 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54fb88b67b-pzx5c_calico-system(57bd3b2d-4319-4516-874e-21e2486ed672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54fb88b67b-pzx5c_calico-system(57bd3b2d-4319-4516-874e-21e2486ed672)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09747a691df9e5ff6ab82c32ae2081ed64a7ea9e47586b4bf3c1be0e3b047c44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54fb88b67b-pzx5c" podUID="57bd3b2d-4319-4516-874e-21e2486ed672" Nov 23 23:03:32.831182 containerd[2006]: time="2025-11-23T23:03:32.828708639Z" level=error msg="Failed to destroy network for sandbox \"5e68d3efdcb3cf014bad6d09d468fe2fd2a07863e861f125588861471e5c3418\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:32.832597 containerd[2006]: time="2025-11-23T23:03:32.832478787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjjzv,Uid:d094bbd9-4e37-478d-88c3-aa6e7c244a7b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e68d3efdcb3cf014bad6d09d468fe2fd2a07863e861f125588861471e5c3418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:32.835049 kubelet[3321]: E1123 23:03:32.834353 3321 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e68d3efdcb3cf014bad6d09d468fe2fd2a07863e861f125588861471e5c3418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:03:32.835049 kubelet[3321]: E1123 23:03:32.834433 3321 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e68d3efdcb3cf014bad6d09d468fe2fd2a07863e861f125588861471e5c3418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-sjjzv" Nov 23 23:03:32.835049 kubelet[3321]: E1123 23:03:32.834467 3321 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e68d3efdcb3cf014bad6d09d468fe2fd2a07863e861f125588861471e5c3418\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-sjjzv" Nov 23 23:03:32.836021 kubelet[3321]: E1123 23:03:32.834537 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-sjjzv_calico-system(d094bbd9-4e37-478d-88c3-aa6e7c244a7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-sjjzv_calico-system(d094bbd9-4e37-478d-88c3-aa6e7c244a7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e68d3efdcb3cf014bad6d09d468fe2fd2a07863e861f125588861471e5c3418\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:03:32.838668 systemd[1]: run-netns-cni\x2db496962e\x2d6392\x2d423e\x2dbc5b\x2d160433899ff4.mount: Deactivated successfully. Nov 23 23:03:38.284764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277245388.mount: Deactivated successfully. 
Nov 23 23:03:38.364255 containerd[2006]: time="2025-11-23T23:03:38.363618018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:38.367000 containerd[2006]: time="2025-11-23T23:03:38.366912966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:03:38.368928 containerd[2006]: time="2025-11-23T23:03:38.368869926Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:38.376385 containerd[2006]: time="2025-11-23T23:03:38.376276674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:03:38.380212 containerd[2006]: time="2025-11-23T23:03:38.380155050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.929217682s" Nov 23 23:03:38.380446 containerd[2006]: time="2025-11-23T23:03:38.380416302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:03:38.454210 containerd[2006]: time="2025-11-23T23:03:38.453571554Z" level=info msg="CreateContainer within sandbox \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:03:38.506191 containerd[2006]: time="2025-11-23T23:03:38.504769999Z" level=info msg="Container 52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:03:38.570065 containerd[2006]: time="2025-11-23T23:03:38.569834923Z" level=info msg="CreateContainer within sandbox \"b9ccbc4ab675ddfb99393c106024e7896525df25d2fb351505cfbec32c911ccb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d\"" Nov 23 23:03:38.573471 containerd[2006]: time="2025-11-23T23:03:38.573189499Z" level=info msg="StartContainer for \"52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d\"" Nov 23 23:03:38.582167 containerd[2006]: time="2025-11-23T23:03:38.580600303Z" level=info msg="connecting to shim 52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d" address="unix:///run/containerd/s/bf7ee186f38370ce6e67579d9ab3e91cf53516c7c94c3998669e20d92ed7c4c8" protocol=ttrpc version=3 Nov 23 23:03:38.668480 systemd[1]: Started cri-containerd-52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d.scope - libcontainer container 52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d. Nov 23 23:03:38.808354 containerd[2006]: time="2025-11-23T23:03:38.808180832Z" level=info msg="StartContainer for \"52ec37ec1d4048d5d85678fbaf31d9cdff9494ea13996cbe3ae3ad487bcc9c8d\" returns successfully" Nov 23 23:03:39.086856 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:03:39.087016 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 23 23:03:39.456536 kubelet[3321]: I1123 23:03:39.456463 3321 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4j4d\" (UniqueName: \"kubernetes.io/projected/57bd3b2d-4319-4516-874e-21e2486ed672-kube-api-access-p4j4d\") pod \"57bd3b2d-4319-4516-874e-21e2486ed672\" (UID: \"57bd3b2d-4319-4516-874e-21e2486ed672\") " Nov 23 23:03:39.457193 kubelet[3321]: I1123 23:03:39.456565 3321 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-ca-bundle\") pod \"57bd3b2d-4319-4516-874e-21e2486ed672\" (UID: \"57bd3b2d-4319-4516-874e-21e2486ed672\") " Nov 23 23:03:39.457193 kubelet[3321]: I1123 23:03:39.456607 3321 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-backend-key-pair\") pod \"57bd3b2d-4319-4516-874e-21e2486ed672\" (UID: \"57bd3b2d-4319-4516-874e-21e2486ed672\") " Nov 23 23:03:39.464339 kubelet[3321]: I1123 23:03:39.464187 3321 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "57bd3b2d-4319-4516-874e-21e2486ed672" (UID: "57bd3b2d-4319-4516-874e-21e2486ed672"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:03:39.485964 systemd[1]: var-lib-kubelet-pods-57bd3b2d\x2d4319\x2d4516\x2d874e\x2d21e2486ed672-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4j4d.mount: Deactivated successfully. Nov 23 23:03:39.500163 kubelet[3321]: I1123 23:03:39.498487 3321 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57bd3b2d-4319-4516-874e-21e2486ed672-kube-api-access-p4j4d" (OuterVolumeSpecName: "kube-api-access-p4j4d") pod "57bd3b2d-4319-4516-874e-21e2486ed672" (UID: "57bd3b2d-4319-4516-874e-21e2486ed672"). InnerVolumeSpecName "kube-api-access-p4j4d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:03:39.501060 kubelet[3321]: I1123 23:03:39.500975 3321 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "57bd3b2d-4319-4516-874e-21e2486ed672" (UID: "57bd3b2d-4319-4516-874e-21e2486ed672"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:03:39.501876 systemd[1]: var-lib-kubelet-pods-57bd3b2d\x2d4319\x2d4516\x2d874e\x2d21e2486ed672-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:03:39.542190 systemd[1]: Removed slice kubepods-besteffort-pod57bd3b2d_4319_4516_874e_21e2486ed672.slice - libcontainer container kubepods-besteffort-pod57bd3b2d_4319_4516_874e_21e2486ed672.slice. 
Nov 23 23:03:39.559958 kubelet[3321]: I1123 23:03:39.559890 3321 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4j4d\" (UniqueName: \"kubernetes.io/projected/57bd3b2d-4319-4516-874e-21e2486ed672-kube-api-access-p4j4d\") on node \"ip-172-31-29-95\" DevicePath \"\"" Nov 23 23:03:39.559958 kubelet[3321]: I1123 23:03:39.559945 3321 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-ca-bundle\") on node \"ip-172-31-29-95\" DevicePath \"\"" Nov 23 23:03:39.560214 kubelet[3321]: I1123 23:03:39.559970 3321 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57bd3b2d-4319-4516-874e-21e2486ed672-whisker-backend-key-pair\") on node \"ip-172-31-29-95\" DevicePath \"\"" Nov 23 23:03:39.618729 kubelet[3321]: I1123 23:03:39.618634 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t8759" podStartSLOduration=2.133757354 podStartE2EDuration="18.618601484s" podCreationTimestamp="2025-11-23 23:03:21 +0000 UTC" firstStartedPulling="2025-11-23 23:03:21.899266732 +0000 UTC m=+38.194010674" lastFinishedPulling="2025-11-23 23:03:38.384110862 +0000 UTC m=+54.678854804" observedRunningTime="2025-11-23 23:03:39.578527892 +0000 UTC m=+55.873271846" watchObservedRunningTime="2025-11-23 23:03:39.618601484 +0000 UTC m=+55.913345438" Nov 23 23:03:39.695090 kubelet[3321]: W1123 23:03:39.694899 3321 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ip-172-31-29-95" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-29-95' and this object Nov 23 23:03:39.695633 kubelet[3321]: E1123 23:03:39.695107 3321 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" logger="UnhandledError" Nov 23 23:03:39.695957 kubelet[3321]: I1123 23:03:39.694842 3321 status_manager.go:890] "Failed to get status for pod" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" pod="calico-system/whisker-7bb99cb694-xrvwh" err="pods \"whisker-7bb99cb694-xrvwh\" is forbidden: User \"system:node:ip-172-31-29-95\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-29-95' and this object" Nov 23 23:03:39.705149 systemd[1]: Created slice kubepods-besteffort-pod77afc798_8fc5_43e1_9a7e_049f9b28d8f3.slice - libcontainer container kubepods-besteffort-pod77afc798_8fc5_43e1_9a7e_049f9b28d8f3.slice. 
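The startup-latency entry for calico-system/calico-node-t8759 above reports podStartE2EDuration=18.618601484s, which is simply observedRunningTime minus podCreationTimestamp as listed in that same entry. A quick check with the timestamps copied from the log (the subtraction is an assumption about how the tracker derives the figure, but it matches the values shown exactly):

```python
from datetime import datetime, timezone

# Fields quoted from the pod_startup_latency_tracker entry above.
created = datetime(2025, 11, 23, 23, 3, 21, tzinfo=timezone.utc)           # podCreationTimestamp
running = datetime(2025, 11, 23, 23, 3, 39, 618601, tzinfo=timezone.utc)   # observedRunningTime (µs precision)

print(running - created)  # 0:00:18.618601 -> the 18.618601484s E2E duration reported
```

The podStartSLOduration of 2.133757354s likewise works out to the E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, about 16.484844s), which again agrees with the values in the entry.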
Nov 23 23:03:39.763457 kubelet[3321]: I1123 23:03:39.763283 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/77afc798-8fc5-43e1-9a7e-049f9b28d8f3-whisker-backend-key-pair\") pod \"whisker-7bb99cb694-xrvwh\" (UID: \"77afc798-8fc5-43e1-9a7e-049f9b28d8f3\") " pod="calico-system/whisker-7bb99cb694-xrvwh" Nov 23 23:03:39.763457 kubelet[3321]: I1123 23:03:39.763418 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77afc798-8fc5-43e1-9a7e-049f9b28d8f3-whisker-ca-bundle\") pod \"whisker-7bb99cb694-xrvwh\" (UID: \"77afc798-8fc5-43e1-9a7e-049f9b28d8f3\") " pod="calico-system/whisker-7bb99cb694-xrvwh" Nov 23 23:03:39.763652 kubelet[3321]: I1123 23:03:39.763624 3321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cchds\" (UniqueName: \"kubernetes.io/projected/77afc798-8fc5-43e1-9a7e-049f9b28d8f3-kube-api-access-cchds\") pod \"whisker-7bb99cb694-xrvwh\" (UID: \"77afc798-8fc5-43e1-9a7e-049f9b28d8f3\") " pod="calico-system/whisker-7bb99cb694-xrvwh" Nov 23 23:03:40.022748 kubelet[3321]: I1123 23:03:40.022608 3321 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57bd3b2d-4319-4516-874e-21e2486ed672" path="/var/lib/kubelet/pods/57bd3b2d-4319-4516-874e-21e2486ed672/volumes" Nov 23 23:03:40.149782 systemd[1]: Started sshd@7-172.31.29.95:22-139.178.89.65:38558.service - OpenSSH per-connection server daemon (139.178.89.65:38558). Nov 23 23:03:40.391488 sshd[4673]: Accepted publickey for core from 139.178.89.65 port 38558 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:03:40.393716 sshd-session[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:40.408932 systemd-logind[1979]: New session 8 of user core. Nov 23 23:03:40.420527 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 23:03:40.808300 sshd[4684]: Connection closed by 139.178.89.65 port 38558 Nov 23 23:03:40.807858 sshd-session[4673]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:40.819918 systemd[1]: sshd@7-172.31.29.95:22-139.178.89.65:38558.service: Deactivated successfully. Nov 23 23:03:40.828260 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:03:40.833397 systemd-logind[1979]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:03:40.835820 systemd-logind[1979]: Removed session 8. Nov 23 23:03:40.916297 containerd[2006]: time="2025-11-23T23:03:40.916235711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb99cb694-xrvwh,Uid:77afc798-8fc5-43e1-9a7e-049f9b28d8f3,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:41.287699 (udev-worker)[4631]: Network interface NamePolicy= disabled on kernel command line. 
Nov 23 23:03:41.290004 systemd-networkd[1811]: calid1157bd9a50: Link UP Nov 23 23:03:41.291328 systemd-networkd[1811]: calid1157bd9a50: Gained carrier Nov 23 23:03:41.328315 containerd[2006]: 2025-11-23 23:03:40.975 [INFO][4725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:03:41.328315 containerd[2006]: 2025-11-23 23:03:41.074 [INFO][4725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0 whisker-7bb99cb694- calico-system 77afc798-8fc5-43e1-9a7e-049f9b28d8f3 930 0 2025-11-23 23:03:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bb99cb694 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-29-95 whisker-7bb99cb694-xrvwh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid1157bd9a50 [] [] }} ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-" Nov 23 23:03:41.328315 containerd[2006]: 2025-11-23 23:03:41.075 [INFO][4725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.328315 containerd[2006]: 2025-11-23 23:03:41.171 [INFO][4736] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" HandleID="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Workload="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.172 [INFO][4736] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" HandleID="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Workload="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-95", "pod":"whisker-7bb99cb694-xrvwh", "timestamp":"2025-11-23 23:03:41.171900872 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.172 [INFO][4736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.172 [INFO][4736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.173 [INFO][4736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.193 [INFO][4736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" host="ip-172-31-29-95" Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.208 [INFO][4736] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.228 [INFO][4736] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.233 [INFO][4736] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:41.328926 containerd[2006]: 2025-11-23 23:03:41.237 [INFO][4736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.237 [INFO][4736] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" host="ip-172-31-29-95" Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.240 [INFO][4736] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.251 [INFO][4736] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" host="ip-172-31-29-95" Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.261 [INFO][4736] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.193/26] block=192.168.121.192/26 handle="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" host="ip-172-31-29-95" Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.261 [INFO][4736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.193/26] handle="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" host="ip-172-31-29-95" Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.262 [INFO][4736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:41.331066 containerd[2006]: 2025-11-23 23:03:41.262 [INFO][4736] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.193/26] IPv6=[] ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" HandleID="k8s-pod-network.51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Workload="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.332772 containerd[2006]: 2025-11-23 23:03:41.270 [INFO][4725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0", GenerateName:"whisker-7bb99cb694-", Namespace:"calico-system", SelfLink:"", UID:"77afc798-8fc5-43e1-9a7e-049f9b28d8f3", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bb99cb694", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"whisker-7bb99cb694-xrvwh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid1157bd9a50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:41.332772 containerd[2006]: 2025-11-23 23:03:41.270 [INFO][4725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.193/32] ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.333277 containerd[2006]: 2025-11-23 23:03:41.270 [INFO][4725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1157bd9a50 ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.333277 containerd[2006]: 2025-11-23 23:03:41.293 [INFO][4725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.333426 containerd[2006]: 2025-11-23 23:03:41.294 [INFO][4725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" 
WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0", GenerateName:"whisker-7bb99cb694-", Namespace:"calico-system", SelfLink:"", UID:"77afc798-8fc5-43e1-9a7e-049f9b28d8f3", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bb99cb694", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea", Pod:"whisker-7bb99cb694-xrvwh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid1157bd9a50", MAC:"a2:9d:b9:e8:73:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:41.333559 containerd[2006]: 2025-11-23 23:03:41.320 [INFO][4725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" Namespace="calico-system" Pod="whisker-7bb99cb694-xrvwh" WorkloadEndpoint="ip--172--31--29--95-k8s-whisker--7bb99cb694--xrvwh-eth0" Nov 23 23:03:41.422470 containerd[2006]: time="2025-11-23T23:03:41.422375073Z" level=info msg="connecting to shim 51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea" address="unix:///run/containerd/s/ef9376a8a3e4ff7c274036fbdc74decb8b7a03d93b87f9d993d796f8fc24e186" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:41.530765 systemd[1]: Started cri-containerd-51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea.scope - libcontainer container 51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea. 
Nov 23 23:03:41.823510 containerd[2006]: time="2025-11-23T23:03:41.823442195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bb99cb694-xrvwh,Uid:77afc798-8fc5-43e1-9a7e-049f9b28d8f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"51a531292b1b58dcce71bb1f1e64adf88b2c20c176b46edba3b00186513879ea\"" Nov 23 23:03:41.830158 containerd[2006]: time="2025-11-23T23:03:41.830078615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:03:42.016379 containerd[2006]: time="2025-11-23T23:03:42.016227776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-szwps,Uid:4be32920-a592-41ee-b676-15a5a370b665,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:03:42.105966 containerd[2006]: time="2025-11-23T23:03:42.105435297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:42.111008 containerd[2006]: time="2025-11-23T23:03:42.110174229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:03:42.111008 containerd[2006]: time="2025-11-23T23:03:42.110333589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:03:42.112568 kubelet[3321]: E1123 23:03:42.112416 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:42.115607 kubelet[3321]: E1123 23:03:42.114175 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:42.123232 kubelet[3321]: E1123 23:03:42.123083 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58e4ff569304c459c01f849858ad86b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:42.132873 containerd[2006]: time="2025-11-23T23:03:42.132631413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:03:42.407838 containerd[2006]: time="2025-11-23T23:03:42.407603794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:42.410735 containerd[2006]: time="2025-11-23T23:03:42.410658934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:03:42.411066 containerd[2006]: time="2025-11-23T23:03:42.410704294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:42.413227 kubelet[3321]: E1123 23:03:42.412454 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:42.413414 kubelet[3321]: E1123 23:03:42.413231 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:42.413613 kubelet[3321]: E1123 23:03:42.413438 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:42.415108 kubelet[3321]: E1123 23:03:42.415023 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:03:42.450420 systemd-networkd[1811]: cali808c5f05f85: Link UP Nov 23 23:03:42.454076 systemd-networkd[1811]: cali808c5f05f85: Gained carrier Nov 23 23:03:42.521937 containerd[2006]: 2025-11-23 23:03:42.194 [INFO][4882] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 
23:03:42.521937 containerd[2006]: 2025-11-23 23:03:42.238 [INFO][4882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0 calico-apiserver-596c4fb774- calico-apiserver 4be32920-a592-41ee-b676-15a5a370b665 862 0 2025-11-23 23:03:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:596c4fb774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-95 calico-apiserver-596c4fb774-szwps eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali808c5f05f85 [] [] }} ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-" Nov 23 23:03:42.521937 containerd[2006]: 2025-11-23 23:03:42.238 [INFO][4882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.521937 containerd[2006]: 2025-11-23 23:03:42.317 [INFO][4895] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" HandleID="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Workload="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.318 [INFO][4895] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" HandleID="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Workload="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-95", "pod":"calico-apiserver-596c4fb774-szwps", "timestamp":"2025-11-23 23:03:42.31753087 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.318 [INFO][4895] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.318 [INFO][4895] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.318 [INFO][4895] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.335 [INFO][4895] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" host="ip-172-31-29-95" Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.347 [INFO][4895] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.356 [INFO][4895] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.364 [INFO][4895] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:42.522918 containerd[2006]: 2025-11-23 23:03:42.368 [INFO][4895] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.369 [INFO][4895] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" host="ip-172-31-29-95" Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.372 [INFO][4895] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5 Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.383 [INFO][4895] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" host="ip-172-31-29-95" Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.429 [INFO][4895] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.194/26] block=192.168.121.192/26 handle="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" host="ip-172-31-29-95" Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.429 [INFO][4895] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.194/26] handle="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" host="ip-172-31-29-95" Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.429 [INFO][4895] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:42.525003 containerd[2006]: 2025-11-23 23:03:42.429 [INFO][4895] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.194/26] IPv6=[] ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" HandleID="k8s-pod-network.fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Workload="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.526688 containerd[2006]: 2025-11-23 23:03:42.435 [INFO][4882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0", GenerateName:"calico-apiserver-596c4fb774-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be32920-a592-41ee-b676-15a5a370b665", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596c4fb774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"calico-apiserver-596c4fb774-szwps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali808c5f05f85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:42.526947 containerd[2006]: 2025-11-23 23:03:42.435 [INFO][4882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.194/32] ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.526947 containerd[2006]: 2025-11-23 23:03:42.435 [INFO][4882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali808c5f05f85 ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.526947 containerd[2006]: 2025-11-23 23:03:42.459 [INFO][4882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.527099 containerd[2006]: 2025-11-23 23:03:42.464 [INFO][4882] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0", GenerateName:"calico-apiserver-596c4fb774-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be32920-a592-41ee-b676-15a5a370b665", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596c4fb774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5", Pod:"calico-apiserver-596c4fb774-szwps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali808c5f05f85", MAC:"a2:7b:19:45:dd:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:42.529610 containerd[2006]: 2025-11-23 23:03:42.511 [INFO][4882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-szwps" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--szwps-eth0" Nov 23 23:03:42.538276 kubelet[3321]: E1123 23:03:42.538206 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:03:42.555298 systemd-networkd[1811]: calid1157bd9a50: Gained IPv6LL Nov 23 23:03:42.608161 containerd[2006]: time="2025-11-23T23:03:42.608055359Z" level=info msg="connecting to shim fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5" 
address="unix:///run/containerd/s/caaa0f7add701f55c75c37ae99d7252115b3777ab102aa8680ed156223a558fc" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:42.680813 systemd[1]: Started cri-containerd-fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5.scope - libcontainer container fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5. Nov 23 23:03:42.930933 containerd[2006]: time="2025-11-23T23:03:42.930649573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-szwps,Uid:4be32920-a592-41ee-b676-15a5a370b665,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fc856c6bb0b32bb535ef73163db145c451d694924618f63808a9bdfc75a6f1f5\"" Nov 23 23:03:42.938394 containerd[2006]: time="2025-11-23T23:03:42.938227429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:43.013655 containerd[2006]: time="2025-11-23T23:03:43.013576413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6qd7,Uid:eb8960c6-f005-4ea0-b8f6-6850fa0745aa,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:43.014342 containerd[2006]: time="2025-11-23T23:03:43.014174013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ndcrw,Uid:57db0512-dfe1-4926-96f1-d477506ac2b6,Namespace:kube-system,Attempt:0,}" Nov 23 23:03:43.203153 containerd[2006]: time="2025-11-23T23:03:43.202352590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:43.207736 containerd[2006]: time="2025-11-23T23:03:43.206937682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:43.207736 containerd[2006]: time="2025-11-23T23:03:43.207177754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:43.207962 kubelet[3321]: E1123 23:03:43.207448 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:43.207962 kubelet[3321]: E1123 23:03:43.207518 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:43.207962 kubelet[3321]: E1123 23:03:43.207850 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhmg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596c4fb774-szwps_calico-apiserver(4be32920-a592-41ee-b676-15a5a370b665): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:43.211494 kubelet[3321]: E1123 23:03:43.209864 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:03:43.508565 systemd-networkd[1811]: caliebe3b9e73a2: Link UP Nov 23 23:03:43.510540 systemd-networkd[1811]: caliebe3b9e73a2: Gained carrier Nov 23 23:03:43.573594 kubelet[3321]: E1123 23:03:43.573479 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:03:43.573838 kubelet[3321]: E1123 23:03:43.573768 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:03:43.591383 containerd[2006]: 2025-11-23 23:03:43.195 [INFO][4964] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:03:43.591383 containerd[2006]: 2025-11-23 23:03:43.288 [INFO][4964] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0 coredns-668d6bf9bc- kube-system 57db0512-dfe1-4926-96f1-d477506ac2b6 867 0 2025-11-23 23:02:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-95 coredns-668d6bf9bc-ndcrw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebe3b9e73a2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-" Nov 23 23:03:43.591383 containerd[2006]: 2025-11-23 23:03:43.288 [INFO][4964] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.591383 containerd[2006]: 2025-11-23 23:03:43.391 [INFO][4986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" HandleID="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Workload="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.391 [INFO][4986] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" HandleID="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Workload="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103790), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-95", "pod":"coredns-668d6bf9bc-ndcrw", "timestamp":"2025-11-23 23:03:43.391095203 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.391 [INFO][4986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.391 [INFO][4986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.391 [INFO][4986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.430 [INFO][4986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" host="ip-172-31-29-95" Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.439 [INFO][4986] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.449 [INFO][4986] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.452 [INFO][4986] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:43.593318 containerd[2006]: 2025-11-23 23:03:43.458 [INFO][4986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.458 [INFO][4986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" host="ip-172-31-29-95" Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.463 [INFO][4986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2 Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.473 [INFO][4986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" host="ip-172-31-29-95" Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.487 [INFO][4986] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.195/26] block=192.168.121.192/26 handle="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" host="ip-172-31-29-95" Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.487 [INFO][4986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.195/26] handle="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" host="ip-172-31-29-95" Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.488 [INFO][4986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:43.593880 containerd[2006]: 2025-11-23 23:03:43.488 [INFO][4986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.195/26] IPv6=[] ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" HandleID="k8s-pod-network.fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Workload="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.597396 containerd[2006]: 2025-11-23 23:03:43.493 [INFO][4964] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"57db0512-dfe1-4926-96f1-d477506ac2b6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"coredns-668d6bf9bc-ndcrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebe3b9e73a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:43.597396 containerd[2006]: 2025-11-23 23:03:43.493 [INFO][4964] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.195/32] ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.597396 containerd[2006]: 2025-11-23 23:03:43.493 [INFO][4964] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebe3b9e73a2 ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.597396 containerd[2006]: 2025-11-23 23:03:43.517 [INFO][4964] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" 
WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.597396 containerd[2006]: 2025-11-23 23:03:43.521 [INFO][4964] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"57db0512-dfe1-4926-96f1-d477506ac2b6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2", Pod:"coredns-668d6bf9bc-ndcrw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebe3b9e73a2", MAC:"7e:ce:59:44:00:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:43.597396 containerd[2006]: 2025-11-23 23:03:43.581 [INFO][4964] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" Namespace="kube-system" Pod="coredns-668d6bf9bc-ndcrw" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--ndcrw-eth0" Nov 23 23:03:43.711843 containerd[2006]: time="2025-11-23T23:03:43.711768949Z" level=info msg="connecting to shim fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2" address="unix:///run/containerd/s/05d11bfa32353c47e71612652b26c61a1afed259a7f840164430a978c64c20f8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:43.762499 systemd-networkd[1811]: calib536813f793: Link UP Nov 23 23:03:43.766857 systemd-networkd[1811]: calib536813f793: Gained carrier Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.245 [INFO][4967] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.298 [INFO][4967] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0 csi-node-driver- calico-system 
eb8960c6-f005-4ea0-b8f6-6850fa0745aa 787 0 2025-11-23 23:03:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-29-95 csi-node-driver-d6qd7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib536813f793 [] [] }} ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.299 [INFO][4967] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.410 [INFO][4991] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" HandleID="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Workload="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.410 [INFO][4991] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" HandleID="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Workload="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003283b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-95", "pod":"csi-node-driver-d6qd7", "timestamp":"2025-11-23 23:03:43.410385479 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.410 [INFO][4991] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.488 [INFO][4991] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.488 [INFO][4991] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.538 [INFO][4991] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.582 [INFO][4991] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.631 [INFO][4991] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.652 [INFO][4991] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.674 [INFO][4991] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.675 [INFO][4991] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.685 [INFO][4991] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257 Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.716 [INFO][4991] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.733 [INFO][4991] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.196/26] block=192.168.121.192/26 handle="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.733 [INFO][4991] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.196/26] handle="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" host="ip-172-31-29-95" Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.733 [INFO][4991] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:43.821099 containerd[2006]: 2025-11-23 23:03:43.733 [INFO][4991] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.196/26] IPv6=[] ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" HandleID="k8s-pod-network.2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Workload="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.824654 containerd[2006]: 2025-11-23 23:03:43.746 [INFO][4967] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb8960c6-f005-4ea0-b8f6-6850fa0745aa", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"csi-node-driver-d6qd7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib536813f793", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:43.824654 containerd[2006]: 2025-11-23 23:03:43.746 [INFO][4967] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.196/32] ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.824654 containerd[2006]: 2025-11-23 23:03:43.746 [INFO][4967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib536813f793 ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.824654 containerd[2006]: 2025-11-23 23:03:43.768 [INFO][4967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.824654 containerd[2006]: 2025-11-23 23:03:43.777 [INFO][4967] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" 
Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eb8960c6-f005-4ea0-b8f6-6850fa0745aa", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257", Pod:"csi-node-driver-d6qd7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib536813f793", MAC:"0a:f6:a2:fe:3b:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:43.824654 containerd[2006]: 2025-11-23 23:03:43.813 [INFO][4967] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" Namespace="calico-system" Pod="csi-node-driver-d6qd7" WorkloadEndpoint="ip--172--31--29--95-k8s-csi--node--driver--d6qd7-eth0" Nov 23 23:03:43.863478 systemd[1]: Started cri-containerd-fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2.scope - libcontainer container fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2. Nov 23 23:03:43.917594 containerd[2006]: time="2025-11-23T23:03:43.917515334Z" level=info msg="connecting to shim 2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257" address="unix:///run/containerd/s/3fd2198c9cf7f3bfde9bd16ecddc61568d6af225f61f6f91f5ef0754c18d12fb" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:44.039422 systemd[1]: Started cri-containerd-2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257.scope - libcontainer container 2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257. 
Nov 23 23:03:44.089399 systemd-networkd[1811]: cali808c5f05f85: Gained IPv6LL Nov 23 23:03:44.154506 containerd[2006]: time="2025-11-23T23:03:44.154433807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ndcrw,Uid:57db0512-dfe1-4926-96f1-d477506ac2b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2\"" Nov 23 23:03:44.171207 containerd[2006]: time="2025-11-23T23:03:44.170350931Z" level=info msg="CreateContainer within sandbox \"fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:03:44.187438 containerd[2006]: time="2025-11-23T23:03:44.187264499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d6qd7,Uid:eb8960c6-f005-4ea0-b8f6-6850fa0745aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bb5f9e9b22380f3bbe779d29dffb6a172ee3c2e7db593c0cc37c29b390bb257\"" Nov 23 23:03:44.193339 containerd[2006]: time="2025-11-23T23:03:44.193272119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:03:44.219834 containerd[2006]: time="2025-11-23T23:03:44.219771383Z" level=info msg="Container f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:03:44.225616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085782001.mount: Deactivated successfully. Nov 23 23:03:44.252190 containerd[2006]: time="2025-11-23T23:03:44.252021887Z" level=info msg="CreateContainer within sandbox \"fc1620983eca07440c5afd56ef3d4052465c6751404d4a11e92a414d785551a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6\"" Nov 23 23:03:44.253441 containerd[2006]: time="2025-11-23T23:03:44.253359779Z" level=info msg="StartContainer for \"f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6\"" Nov 23 23:03:44.259412 containerd[2006]: time="2025-11-23T23:03:44.259340495Z" level=info msg="connecting to shim f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6" address="unix:///run/containerd/s/05d11bfa32353c47e71612652b26c61a1afed259a7f840164430a978c64c20f8" protocol=ttrpc version=3 Nov 23 23:03:44.313663 systemd[1]: Started cri-containerd-f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6.scope - libcontainer container f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6. 
Nov 23 23:03:44.398314 containerd[2006]: time="2025-11-23T23:03:44.398242632Z" level=info msg="StartContainer for \"f2e8a41badde5aa2a4eb462f230e86b36c4818d1a3ff2a5f1861f3c00f04d9d6\" returns successfully" Nov 23 23:03:44.433845 containerd[2006]: time="2025-11-23T23:03:44.433620360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:44.437160 containerd[2006]: time="2025-11-23T23:03:44.435976404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:03:44.437602 containerd[2006]: time="2025-11-23T23:03:44.436060308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:03:44.438170 kubelet[3321]: E1123 23:03:44.437946 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:44.438170 kubelet[3321]: E1123 23:03:44.438006 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:44.439449 kubelet[3321]: E1123 23:03:44.439366 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:44.443641 containerd[2006]: time="2025-11-23T23:03:44.443558292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:03:44.565408 kubelet[3321]: E1123 23:03:44.565188 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:03:44.639298 kubelet[3321]: I1123 23:03:44.638626 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ndcrw" podStartSLOduration=56.638599045 podStartE2EDuration="56.638599045s" podCreationTimestamp="2025-11-23 23:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:03:44.636992281 +0000 UTC m=+60.931736319" watchObservedRunningTime="2025-11-23 23:03:44.638599045 +0000 UTC m=+60.933343011" Nov 23 23:03:44.687279 containerd[2006]: time="2025-11-23T23:03:44.687188617Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Nov 23 23:03:44.689588 containerd[2006]: time="2025-11-23T23:03:44.689490349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:03:44.690412 containerd[2006]: time="2025-11-23T23:03:44.689556361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:03:44.691434 kubelet[3321]: E1123 23:03:44.691098 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:44.692143 kubelet[3321]: E1123 23:03:44.691722 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:44.695554 kubelet[3321]: E1123 23:03:44.693987 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:44.697473 kubelet[3321]: E1123 23:03:44.697193 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:44.795021 systemd-networkd[1811]: caliebe3b9e73a2: Gained IPv6LL Nov 23 23:03:44.855786 systemd-networkd[1811]: vxlan.calico: Link UP Nov 23 23:03:44.855802 systemd-networkd[1811]: vxlan.calico: Gained carrier Nov 23 23:03:44.879993 (udev-worker)[4634]: Network interface NamePolicy= disabled on kernel command line. Nov 23 23:03:45.568058 kubelet[3321]: E1123 23:03:45.567843 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:45.754583 systemd-networkd[1811]: calib536813f793: Gained IPv6LL Nov 23 23:03:45.857637 systemd[1]: Started sshd@8-172.31.29.95:22-139.178.89.65:38572.service - OpenSSH per-connection server daemon (139.178.89.65:38572). 
Nov 23 23:03:46.013621 containerd[2006]: time="2025-11-23T23:03:46.012269328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fb77858b-7fnfw,Uid:607d6cea-c322-4995-9bb6-13328b249dcf,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:46.015676 containerd[2006]: time="2025-11-23T23:03:46.015309228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-qwzhg,Uid:33a858d5-f639-4092-9d21-043beaa938d2,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:03:46.015676 containerd[2006]: time="2025-11-23T23:03:46.015349044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjjzv,Uid:d094bbd9-4e37-478d-88c3-aa6e7c244a7b,Namespace:calico-system,Attempt:0,}" Nov 23 23:03:46.073491 systemd-networkd[1811]: vxlan.calico: Gained IPv6LL Nov 23 23:03:46.076807 sshd[5246]: Accepted publickey for core from 139.178.89.65 port 38572 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:03:46.082080 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:46.114421 systemd-logind[1979]: New session 9 of user core. Nov 23 23:03:46.121541 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 23:03:46.593446 sshd[5280]: Connection closed by 139.178.89.65 port 38572 Nov 23 23:03:46.598058 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:46.614567 systemd[1]: sshd@8-172.31.29.95:22-139.178.89.65:38572.service: Deactivated successfully. Nov 23 23:03:46.615351 systemd-logind[1979]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:03:46.622971 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:03:46.634993 systemd-logind[1979]: Removed session 9. 
Nov 23 23:03:46.719024 systemd-networkd[1811]: cali87b0b16d91d: Link UP Nov 23 23:03:46.720754 systemd-networkd[1811]: cali87b0b16d91d: Gained carrier Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.299 [INFO][5249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0 calico-apiserver-596c4fb774- calico-apiserver 33a858d5-f639-4092-9d21-043beaa938d2 868 0 2025-11-23 23:03:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:596c4fb774 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-95 calico-apiserver-596c4fb774-qwzhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali87b0b16d91d [] [] }} ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.299 [INFO][5249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.517 [INFO][5296] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" HandleID="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Workload="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.518 [INFO][5296] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" HandleID="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Workload="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000356a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-95", "pod":"calico-apiserver-596c4fb774-qwzhg", "timestamp":"2025-11-23 23:03:46.517901403 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.519 [INFO][5296] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.520 [INFO][5296] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.521 [INFO][5296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.578 [INFO][5296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.596 [INFO][5296] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.641 [INFO][5296] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.647 [INFO][5296] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.654 [INFO][5296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.654 [INFO][5296] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.661 [INFO][5296] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.674 [INFO][5296] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.689 [INFO][5296] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.197/26] block=192.168.121.192/26 handle="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.689 [INFO][5296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.197/26] handle="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" host="ip-172-31-29-95" Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.691 [INFO][5296] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:46.783688 containerd[2006]: 2025-11-23 23:03:46.691 [INFO][5296] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.197/26] IPv6=[] ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" HandleID="k8s-pod-network.cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Workload="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.784959 containerd[2006]: 2025-11-23 23:03:46.703 [INFO][5249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0", GenerateName:"calico-apiserver-596c4fb774-", Namespace:"calico-apiserver", SelfLink:"", UID:"33a858d5-f639-4092-9d21-043beaa938d2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596c4fb774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"calico-apiserver-596c4fb774-qwzhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87b0b16d91d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:46.784959 containerd[2006]: 2025-11-23 23:03:46.703 [INFO][5249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.197/32] ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.784959 containerd[2006]: 2025-11-23 23:03:46.703 [INFO][5249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87b0b16d91d ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.784959 containerd[2006]: 2025-11-23 23:03:46.725 [INFO][5249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.784959 containerd[2006]: 2025-11-23 23:03:46.734 [INFO][5249] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0", GenerateName:"calico-apiserver-596c4fb774-", Namespace:"calico-apiserver", SelfLink:"", UID:"33a858d5-f639-4092-9d21-043beaa938d2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596c4fb774", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c", Pod:"calico-apiserver-596c4fb774-qwzhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali87b0b16d91d", MAC:"d6:de:77:8e:ba:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:46.784959 containerd[2006]: 2025-11-23 23:03:46.769 [INFO][5249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" Namespace="calico-apiserver" Pod="calico-apiserver-596c4fb774-qwzhg" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--apiserver--596c4fb774--qwzhg-eth0" Nov 23 23:03:46.894287 containerd[2006]: time="2025-11-23T23:03:46.894221104Z" level=info msg="connecting to shim cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c" address="unix:///run/containerd/s/7b453ca758d57d2e770a5f993ea2b81fcf776ae6c775e690fdc4e53ae27a67f4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:46.910756 systemd-networkd[1811]: cali4cbf7ae9419: Link UP Nov 23 23:03:46.914221 systemd-networkd[1811]: cali4cbf7ae9419: Gained carrier Nov 23 23:03:47.016767 containerd[2006]: time="2025-11-23T23:03:47.016709017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhdqb,Uid:c89cfe9d-5560-450f-9829-e883ba097ecf,Namespace:kube-system,Attempt:0,}" Nov 23 23:03:47.022895 systemd[1]: Started cri-containerd-cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c.scope - libcontainer container cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c. 
Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.348 [INFO][5255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0 calico-kube-controllers-68fb77858b- calico-system 607d6cea-c322-4995-9bb6-13328b249dcf 869 0 2025-11-23 23:03:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68fb77858b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-95 calico-kube-controllers-68fb77858b-7fnfw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4cbf7ae9419 [] [] }} ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.348 [INFO][5255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.531 [INFO][5302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" HandleID="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Workload="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.532 [INFO][5302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" HandleID="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Workload="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400025bbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-95", "pod":"calico-kube-controllers-68fb77858b-7fnfw", "timestamp":"2025-11-23 23:03:46.531189735 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.532 [INFO][5302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.691 [INFO][5302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.692 [INFO][5302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.733 [INFO][5302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.765 [INFO][5302] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.795 [INFO][5302] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.802 [INFO][5302] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.810 [INFO][5302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.811 [INFO][5302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.825 [INFO][5302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.835 [INFO][5302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.869 [INFO][5302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.198/26] block=192.168.121.192/26 handle="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.869 [INFO][5302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.198/26] handle="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" host="ip-172-31-29-95" Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.870 [INFO][5302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:47.032296 containerd[2006]: 2025-11-23 23:03:46.870 [INFO][5302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.198/26] IPv6=[] ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" HandleID="k8s-pod-network.502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Workload="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.039984 containerd[2006]: 2025-11-23 23:03:46.887 [INFO][5255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0", GenerateName:"calico-kube-controllers-68fb77858b-", Namespace:"calico-system", SelfLink:"", UID:"607d6cea-c322-4995-9bb6-13328b249dcf", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68fb77858b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"calico-kube-controllers-68fb77858b-7fnfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cbf7ae9419", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:47.039984 containerd[2006]: 2025-11-23 23:03:46.887 [INFO][5255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.198/32] ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.039984 containerd[2006]: 2025-11-23 23:03:46.887 [INFO][5255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cbf7ae9419 ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.039984 containerd[2006]: 2025-11-23 23:03:46.914 [INFO][5255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.039984 containerd[2006]: 
2025-11-23 23:03:46.921 [INFO][5255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0", GenerateName:"calico-kube-controllers-68fb77858b-", Namespace:"calico-system", SelfLink:"", UID:"607d6cea-c322-4995-9bb6-13328b249dcf", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68fb77858b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac", Pod:"calico-kube-controllers-68fb77858b-7fnfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cbf7ae9419", MAC:"8a:dd:1e:51:d9:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:47.039984 containerd[2006]: 2025-11-23 23:03:46.976 [INFO][5255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" Namespace="calico-system" Pod="calico-kube-controllers-68fb77858b-7fnfw" WorkloadEndpoint="ip--172--31--29--95-k8s-calico--kube--controllers--68fb77858b--7fnfw-eth0" Nov 23 23:03:47.163324 systemd-networkd[1811]: cali55aff965928: Link UP Nov 23 23:03:47.163797 systemd-networkd[1811]: cali55aff965928: Gained carrier Nov 23 23:03:47.225153 containerd[2006]: time="2025-11-23T23:03:47.224905034Z" level=info msg="connecting to shim 502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac" address="unix:///run/containerd/s/a17771e0edf0d59d0cc14d689829d3fccdbdb9465de3f6bf02c123b595dcf235" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.430 [INFO][5262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0 goldmane-666569f655- calico-system d094bbd9-4e37-478d-88c3-aa6e7c244a7b 866 0 2025-11-23 23:03:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-29-95 goldmane-666569f655-sjjzv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali55aff965928 [] [] }} 
ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.431 [INFO][5262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.624 [INFO][5309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" HandleID="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Workload="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.625 [INFO][5309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" HandleID="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Workload="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001dd200), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-95", "pod":"goldmane-666569f655-sjjzv", "timestamp":"2025-11-23 23:03:46.624674427 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.637 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.870 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.871 [INFO][5309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.920 [INFO][5309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.945 [INFO][5309] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.969 [INFO][5309] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:46.978 [INFO][5309] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.018 [INFO][5309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.019 [INFO][5309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.035 [INFO][5309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76 Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.066 [INFO][5309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.093 [INFO][5309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.199/26] block=192.168.121.192/26 handle="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.093 [INFO][5309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.199/26] handle="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" host="ip-172-31-29-95" Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.094 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:47.245813 containerd[2006]: 2025-11-23 23:03:47.096 [INFO][5309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.199/26] IPv6=[] ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" HandleID="k8s-pod-network.4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Workload="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.247005 containerd[2006]: 2025-11-23 23:03:47.137 [INFO][5262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d094bbd9-4e37-478d-88c3-aa6e7c244a7b", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"goldmane-666569f655-sjjzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali55aff965928", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:47.247005 containerd[2006]: 2025-11-23 23:03:47.139 [INFO][5262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.199/32] ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.247005 containerd[2006]: 2025-11-23 23:03:47.139 [INFO][5262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55aff965928 ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.247005 containerd[2006]: 2025-11-23 23:03:47.156 [INFO][5262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.247005 containerd[2006]: 2025-11-23 23:03:47.169 [INFO][5262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" 
WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d094bbd9-4e37-478d-88c3-aa6e7c244a7b", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76", Pod:"goldmane-666569f655-sjjzv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali55aff965928", MAC:"da:a6:8b:03:89:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:47.247005 containerd[2006]: 2025-11-23 23:03:47.229 [INFO][5262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" Namespace="calico-system" Pod="goldmane-666569f655-sjjzv" WorkloadEndpoint="ip--172--31--29--95-k8s-goldmane--666569f655--sjjzv-eth0" Nov 23 23:03:47.330465 systemd[1]: Started cri-containerd-502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac.scope - libcontainer container 502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac. Nov 23 23:03:47.404811 containerd[2006]: time="2025-11-23T23:03:47.404627079Z" level=info msg="connecting to shim 4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76" address="unix:///run/containerd/s/b9ea5d50b592f779a6a1dd08c9e668d600c109ad96be1e6a588208e99f2f0444" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:47.512449 systemd[1]: Started cri-containerd-4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76.scope - libcontainer container 4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76. 
Nov 23 23:03:47.600577 containerd[2006]: time="2025-11-23T23:03:47.600180928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596c4fb774-qwzhg,Uid:33a858d5-f639-4092-9d21-043beaa938d2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cec00071dc08264fe972458de2d869c47fec69d63387134f217a16ef38cbcc7c\"" Nov 23 23:03:47.608891 containerd[2006]: time="2025-11-23T23:03:47.608818804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:47.710487 systemd-networkd[1811]: calie1818a1a959: Link UP Nov 23 23:03:47.714437 systemd-networkd[1811]: calie1818a1a959: Gained carrier Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.372 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0 coredns-668d6bf9bc- kube-system c89cfe9d-5560-450f-9829-e883ba097ecf 856 0 2025-11-23 23:02:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-95 coredns-668d6bf9bc-fhdqb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1818a1a959 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.372 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.552 [INFO][5442] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" HandleID="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Workload="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.552 [INFO][5442] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" HandleID="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Workload="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b910), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-95", "pod":"coredns-668d6bf9bc-fhdqb", "timestamp":"2025-11-23 23:03:47.552184576 +0000 UTC"}, Hostname:"ip-172-31-29-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.553 [INFO][5442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.553 [INFO][5442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.553 [INFO][5442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-95' Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.587 [INFO][5442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.602 [INFO][5442] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.624 [INFO][5442] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.636 [INFO][5442] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.652 [INFO][5442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.652 [INFO][5442] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.659 [INFO][5442] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.674 [INFO][5442] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.690 [INFO][5442] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.200/26] block=192.168.121.192/26 handle="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.690 [INFO][5442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.200/26] handle="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" host="ip-172-31-29-95" Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.690 [INFO][5442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:03:47.771588 containerd[2006]: 2025-11-23 23:03:47.690 [INFO][5442] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.200/26] IPv6=[] ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" HandleID="k8s-pod-network.ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Workload="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.775413 containerd[2006]: 2025-11-23 23:03:47.696 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c89cfe9d-5560-450f-9829-e883ba097ecf", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"", Pod:"coredns-668d6bf9bc-fhdqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1818a1a959", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:47.775413 containerd[2006]: 2025-11-23 23:03:47.696 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.200/32] ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.775413 containerd[2006]: 2025-11-23 23:03:47.697 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1818a1a959 ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.775413 containerd[2006]: 2025-11-23 23:03:47.717 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" 
WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.775413 containerd[2006]: 2025-11-23 23:03:47.720 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c89cfe9d-5560-450f-9829-e883ba097ecf", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-95", ContainerID:"ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c", Pod:"coredns-668d6bf9bc-fhdqb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1818a1a959", MAC:"36:dd:92:94:a6:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:03:47.775413 containerd[2006]: 2025-11-23 23:03:47.754 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" Namespace="kube-system" Pod="coredns-668d6bf9bc-fhdqb" WorkloadEndpoint="ip--172--31--29--95-k8s-coredns--668d6bf9bc--fhdqb-eth0" Nov 23 23:03:47.891411 containerd[2006]: time="2025-11-23T23:03:47.891318713Z" level=info msg="connecting to shim ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c" address="unix:///run/containerd/s/0fc803b594dcec685cf3d212fdaff452ab2fba0b6e9f14c8824dbfc18ee04666" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:03:47.900218 containerd[2006]: time="2025-11-23T23:03:47.900104585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68fb77858b-7fnfw,Uid:607d6cea-c322-4995-9bb6-13328b249dcf,Namespace:calico-system,Attempt:0,} returns sandbox id \"502e201049bfd3eba3eb26ca7b2a6928f6d7787968187cd621ade2b17b211aac\"" Nov 23 23:03:47.909094 containerd[2006]: time="2025-11-23T23:03:47.907515977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:47.910800 containerd[2006]: time="2025-11-23T23:03:47.910671185Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:47.911074 containerd[2006]: time="2025-11-23T23:03:47.910868021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:47.912174 kubelet[3321]: E1123 23:03:47.911972 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:47.912174 kubelet[3321]: E1123 23:03:47.912150 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:47.913946 kubelet[3321]: E1123 23:03:47.912484 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqlth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-596c4fb774-qwzhg_calico-apiserver(33a858d5-f639-4092-9d21-043beaa938d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:47.916254 kubelet[3321]: E1123 23:03:47.915769 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:03:47.917436 containerd[2006]: time="2025-11-23T23:03:47.917366837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:03:47.944334 containerd[2006]: time="2025-11-23T23:03:47.944263866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-sjjzv,Uid:d094bbd9-4e37-478d-88c3-aa6e7c244a7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ca426bba3216a3e409a5562a41b1d94d2e7fadc535053dd341f7b4951158f76\"" Nov 23 23:03:47.991607 systemd[1]: Started cri-containerd-ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c.scope - libcontainer container ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c. Nov 23 23:03:48.058584 systemd-networkd[1811]: cali87b0b16d91d: Gained IPv6LL Nov 23 23:03:48.127298 containerd[2006]: time="2025-11-23T23:03:48.127199498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fhdqb,Uid:c89cfe9d-5560-450f-9829-e883ba097ecf,Namespace:kube-system,Attempt:0,} returns sandbox id \"ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c\"" Nov 23 23:03:48.140244 containerd[2006]: time="2025-11-23T23:03:48.140168199Z" level=info msg="CreateContainer within sandbox \"ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:03:48.167239 containerd[2006]: time="2025-11-23T23:03:48.167180643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:48.168508 containerd[2006]: time="2025-11-23T23:03:48.168337227Z" level=info msg="Container e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:03:48.171108 containerd[2006]: time="2025-11-23T23:03:48.171027075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:03:48.173324 containerd[2006]: time="2025-11-23T23:03:48.173222847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:48.175798 kubelet[3321]: E1123 23:03:48.175709 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:03:48.175932 kubelet[3321]: E1123 23:03:48.175789 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:03:48.176978 containerd[2006]: time="2025-11-23T23:03:48.176837715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:03:48.181341 kubelet[3321]: E1123 23:03:48.176105 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkwx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68fb77858b-7fnfw_calico-system(607d6cea-c322-4995-9bb6-13328b249dcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:48.182681 kubelet[3321]: E1123 23:03:48.180092 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:03:48.209991 containerd[2006]: time="2025-11-23T23:03:48.209483931Z" level=info msg="CreateContainer within sandbox \"ace1c8b39aa1d1cc55f1c622e4cefb331fd6e065d931d4944c93b50cac22829c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f\"" Nov 23 23:03:48.214239 containerd[2006]: time="2025-11-23T23:03:48.212520807Z" level=info msg="StartContainer for \"e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f\"" Nov 23 23:03:48.220413 containerd[2006]: time="2025-11-23T23:03:48.220352967Z" level=info msg="connecting to shim e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f" address="unix:///run/containerd/s/0fc803b594dcec685cf3d212fdaff452ab2fba0b6e9f14c8824dbfc18ee04666" protocol=ttrpc version=3 Nov 23 23:03:48.270512 systemd[1]: Started cri-containerd-e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f.scope - libcontainer container e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f. 
Nov 23 23:03:48.346272 containerd[2006]: time="2025-11-23T23:03:48.345958996Z" level=info msg="StartContainer for \"e05195f003e5b21d0acd3b19cdf0c9a03435b919067caa8185ae1a50dc89971f\" returns successfully" Nov 23 23:03:48.423852 containerd[2006]: time="2025-11-23T23:03:48.423716332Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:48.426403 containerd[2006]: time="2025-11-23T23:03:48.426303280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:03:48.426603 containerd[2006]: time="2025-11-23T23:03:48.426458728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:48.426974 kubelet[3321]: E1123 23:03:48.426859 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:48.427086 kubelet[3321]: E1123 23:03:48.426996 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:48.428096 kubelet[3321]: E1123 23:03:48.427830 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxk77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjjzv_calico-system(d094bbd9-4e37-478d-88c3-aa6e7c244a7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:48.429595 kubelet[3321]: E1123 23:03:48.429471 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:03:48.505367 systemd-networkd[1811]: cali55aff965928: Gained IPv6LL Nov 23 23:03:48.596478 kubelet[3321]: E1123 23:03:48.596257 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:03:48.606560 kubelet[3321]: E1123 23:03:48.606164 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:03:48.609704 kubelet[3321]: E1123 23:03:48.608576 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:03:48.688718 kubelet[3321]: I1123 23:03:48.688602 3321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fhdqb" podStartSLOduration=61.688575821 podStartE2EDuration="1m1.688575821s" podCreationTimestamp="2025-11-23 23:02:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:03:48.687462365 +0000 UTC m=+64.982206331" watchObservedRunningTime="2025-11-23 23:03:48.688575821 +0000 UTC m=+64.983319775" Nov 23 23:03:48.763713 systemd-networkd[1811]: cali4cbf7ae9419: Gained IPv6LL Nov 23 23:03:49.145359 systemd-networkd[1811]: calie1818a1a959: Gained IPv6LL Nov 23 23:03:49.627872 kubelet[3321]: E1123 23:03:49.627620 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:03:49.632348 kubelet[3321]: E1123 23:03:49.632101 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:03:49.633821 kubelet[3321]: E1123 23:03:49.633741 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:03:51.635735 systemd[1]: Started sshd@9-172.31.29.95:22-139.178.89.65:47496.service - OpenSSH per-connection server daemon (139.178.89.65:47496). 
Nov 23 23:03:51.868517 sshd[5614]: Accepted publickey for core from 139.178.89.65 port 47496 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:03:51.871904 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:51.883855 systemd-logind[1979]: New session 10 of user core. Nov 23 23:03:51.887436 ntpd[2187]: Listen normally on 6 vxlan.calico 192.168.121.192:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 6 vxlan.calico 192.168.121.192:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 7 calid1157bd9a50 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 8 cali808c5f05f85 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 9 caliebe3b9e73a2 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 10 calib536813f793 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 11 vxlan.calico [fe80::646e:69ff:fe43:959a%8]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 12 cali87b0b16d91d [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 13 cali4cbf7ae9419 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 14 cali55aff965928 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 23 23:03:51.890287 ntpd[2187]: 23 Nov 23:03:51 ntpd[2187]: Listen normally on 15 calie1818a1a959 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 23:03:51.888224 ntpd[2187]: Listen normally on 7 calid1157bd9a50 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 23 23:03:51.888305 ntpd[2187]: Listen normally on 8 cali808c5f05f85 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 23 23:03:51.888356 ntpd[2187]: Listen normally on 9 caliebe3b9e73a2 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 23 23:03:51.888406 ntpd[2187]: Listen normally on 10 calib536813f793 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 23 23:03:51.888456 ntpd[2187]: Listen normally on 11 vxlan.calico [fe80::646e:69ff:fe43:959a%8]:123 Nov 23 23:03:51.888507 ntpd[2187]: Listen normally on 12 cali87b0b16d91d [fe80::ecee:eeff:feee:eeee%11]:123 Nov 23 23:03:51.888555 ntpd[2187]: Listen normally on 13 cali4cbf7ae9419 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 23 23:03:51.888599 ntpd[2187]: Listen normally on 14 cali55aff965928 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 23 23:03:51.888644 ntpd[2187]: Listen normally on 15 calie1818a1a959 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 23 23:03:51.895448 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:03:52.178479 sshd[5617]: Connection closed by 139.178.89.65 port 47496 Nov 23 23:03:52.178899 sshd-session[5614]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:52.190646 systemd-logind[1979]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:03:52.191331 systemd[1]: sshd@9-172.31.29.95:22-139.178.89.65:47496.service: Deactivated successfully. Nov 23 23:03:52.201597 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:03:52.231687 systemd-logind[1979]: Removed session 10. Nov 23 23:03:52.234651 systemd[1]: Started sshd@10-172.31.29.95:22-139.178.89.65:47512.service - OpenSSH per-connection server daemon (139.178.89.65:47512). 
Nov 23 23:03:52.440682 sshd[5630]: Accepted publickey for core from 139.178.89.65 port 47512 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:03:52.443482 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:52.451942 systemd-logind[1979]: New session 11 of user core. Nov 23 23:03:52.459423 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:03:52.814709 sshd[5633]: Connection closed by 139.178.89.65 port 47512 Nov 23 23:03:52.818403 sshd-session[5630]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:52.827967 systemd[1]: sshd@10-172.31.29.95:22-139.178.89.65:47512.service: Deactivated successfully. Nov 23 23:03:52.837712 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:03:52.847880 systemd-logind[1979]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:03:52.873240 systemd[1]: Started sshd@11-172.31.29.95:22-139.178.89.65:47518.service - OpenSSH per-connection server daemon (139.178.89.65:47518). Nov 23 23:03:52.878071 systemd-logind[1979]: Removed session 11. Nov 23 23:03:53.091427 sshd[5643]: Accepted publickey for core from 139.178.89.65 port 47518 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:03:53.094506 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:53.106596 systemd-logind[1979]: New session 12 of user core. Nov 23 23:03:53.117471 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:03:53.423982 sshd[5648]: Connection closed by 139.178.89.65 port 47518 Nov 23 23:03:53.425033 sshd-session[5643]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:53.437418 systemd-logind[1979]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:03:53.438731 systemd[1]: sshd@11-172.31.29.95:22-139.178.89.65:47518.service: Deactivated successfully. Nov 23 23:03:53.445776 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:03:53.453236 systemd-logind[1979]: Removed session 12. 
Nov 23 23:03:55.014500 containerd[2006]: time="2025-11-23T23:03:55.014409285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:03:55.285736 containerd[2006]: time="2025-11-23T23:03:55.285539014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:55.288957 containerd[2006]: time="2025-11-23T23:03:55.288855910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:03:55.289184 containerd[2006]: time="2025-11-23T23:03:55.288867862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:03:55.289363 kubelet[3321]: E1123 23:03:55.289282 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:55.289932 kubelet[3321]: E1123 23:03:55.289363 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:55.289932 kubelet[3321]: E1123 23:03:55.289520 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58e4ff569304c459c01f849858ad86b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:55.294029 containerd[2006]: time="2025-11-23T23:03:55.293939254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:03:55.595222 containerd[2006]: time="2025-11-23T23:03:55.595033620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:55.597909 containerd[2006]: time="2025-11-23T23:03:55.597792840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:03:55.597909 containerd[2006]: time="2025-11-23T23:03:55.597869136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:55.598187 kubelet[3321]: E1123 23:03:55.598111 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:55.598290 kubelet[3321]: E1123 23:03:55.598209 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:55.598428 kubelet[3321]: E1123 23:03:55.598355 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:55.599636 kubelet[3321]: E1123 23:03:55.599533 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:03:57.014915 containerd[2006]: time="2025-11-23T23:03:57.014702279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:03:57.298292 containerd[2006]: time="2025-11-23T23:03:57.298033596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:57.300856 containerd[2006]: time="2025-11-23T23:03:57.300770760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:03:57.300994 containerd[2006]: time="2025-11-23T23:03:57.300924996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:03:57.301305 kubelet[3321]: E1123 23:03:57.301174 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:57.301305 kubelet[3321]: E1123 23:03:57.301242 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:57.302485 kubelet[3321]: E1123 23:03:57.301452 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:57.305606 containerd[2006]: time="2025-11-23T23:03:57.305528040Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:03:57.575273 containerd[2006]: time="2025-11-23T23:03:57.575040493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:57.577896 containerd[2006]: time="2025-11-23T23:03:57.577708681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:03:57.577896 containerd[2006]: time="2025-11-23T23:03:57.577799749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:03:57.578334 kubelet[3321]: E1123 23:03:57.578264 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:57.578493 kubelet[3321]: E1123 23:03:57.578350 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:57.578717 kubelet[3321]: E1123 23:03:57.578527 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:57.580801 kubelet[3321]: E1123 23:03:57.580712 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:03:58.464516 systemd[1]: Started sshd@12-172.31.29.95:22-139.178.89.65:47522.service - OpenSSH per-connection server daemon (139.178.89.65:47522). Nov 23 23:03:58.685024 sshd[5668]: Accepted publickey for core from 139.178.89.65 port 47522 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:03:58.688256 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:58.708980 systemd-logind[1979]: New session 13 of user core. 
Nov 23 23:03:58.718834 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:03:59.003215 sshd[5671]: Connection closed by 139.178.89.65 port 47522 Nov 23 23:03:59.001675 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:59.013191 systemd[1]: sshd@12-172.31.29.95:22-139.178.89.65:47522.service: Deactivated successfully. Nov 23 23:03:59.018154 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:03:59.021391 systemd-logind[1979]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:03:59.025963 systemd-logind[1979]: Removed session 13. Nov 23 23:04:00.017174 containerd[2006]: time="2025-11-23T23:04:00.015688094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:04:00.295524 containerd[2006]: time="2025-11-23T23:04:00.295059435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:00.297758 containerd[2006]: time="2025-11-23T23:04:00.297599043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:04:00.297758 containerd[2006]: time="2025-11-23T23:04:00.297677655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:04:00.298018 kubelet[3321]: E1123 23:04:00.297944 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:00.299058 kubelet[3321]: E1123 23:04:00.298020 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:00.299058 kubelet[3321]: E1123 23:04:00.298229 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhmg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596c4fb774-szwps_calico-apiserver(4be32920-a592-41ee-b676-15a5a370b665): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:00.300240 kubelet[3321]: E1123 23:04:00.300172 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:04:02.015348 containerd[2006]: time="2025-11-23T23:04:02.015088095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:04:02.276091 containerd[2006]: time="2025-11-23T23:04:02.275927573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:02.278316 containerd[2006]: time="2025-11-23T23:04:02.278237753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:04:02.278491 containerd[2006]: time="2025-11-23T23:04:02.278282489Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:04:02.278875 kubelet[3321]: E1123 23:04:02.278756 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:04:02.279790 kubelet[3321]: E1123 23:04:02.278843 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:04:02.280858 kubelet[3321]: E1123 23:04:02.280644 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxk77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjjzv_calico-system(d094bbd9-4e37-478d-88c3-aa6e7c244a7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:02.282779 kubelet[3321]: E1123 23:04:02.282667 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:04:03.013786 containerd[2006]: time="2025-11-23T23:04:03.013652380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:04:03.295024 containerd[2006]: time="2025-11-23T23:04:03.294876270Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:03.297784 containerd[2006]: time="2025-11-23T23:04:03.297673446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:04:03.298023 containerd[2006]: time="2025-11-23T23:04:03.297768234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:04:03.298679 kubelet[3321]: E1123 23:04:03.298394 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:04:03.298679 kubelet[3321]: E1123 23:04:03.298462 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:04:03.300153 
kubelet[3321]: E1123 23:04:03.299314 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkwx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68fb77858b-7fnfw_calico-system(607d6cea-c322-4995-9bb6-13328b249dcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:03.300368 containerd[2006]: time="2025-11-23T23:04:03.298919478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:04:03.301250 kubelet[3321]: E1123 23:04:03.301180 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:04:03.586077 containerd[2006]: time="2025-11-23T23:04:03.585922735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:03.588307 containerd[2006]: time="2025-11-23T23:04:03.588227947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:04:03.588486 containerd[2006]: time="2025-11-23T23:04:03.588367627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:04:03.588693 kubelet[3321]: E1123 23:04:03.588629 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:03.588792 kubelet[3321]: E1123 23:04:03.588704 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:03.588948 kubelet[3321]: E1123 23:04:03.588871 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqlth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596c4fb774-qwzhg_calico-apiserver(33a858d5-f639-4092-9d21-043beaa938d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:03.590846 kubelet[3321]: E1123 23:04:03.590777 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:04:04.040584 systemd[1]: Started sshd@13-172.31.29.95:22-139.178.89.65:57610.service - OpenSSH per-connection server daemon (139.178.89.65:57610). Nov 23 23:04:04.239168 sshd[5687]: Accepted publickey for core from 139.178.89.65 port 57610 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:04.244391 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:04.259935 systemd-logind[1979]: New session 14 of user core. Nov 23 23:04:04.268419 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:04:04.534376 sshd[5690]: Connection closed by 139.178.89.65 port 57610 Nov 23 23:04:04.534853 sshd-session[5687]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:04.542674 systemd[1]: sshd@13-172.31.29.95:22-139.178.89.65:57610.service: Deactivated successfully. Nov 23 23:04:04.547674 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:04:04.550858 systemd-logind[1979]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:04:04.554267 systemd-logind[1979]: Removed session 14. 
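
Every pull in the entries above fails the same way: containerd's fetch of the manifest for a ghcr.io/flatcar/calico/*:v3.30.4 reference answers 404, which surfaces as the NotFound gRPC error the kubelet then records as ErrImagePull. One way to confirm such a missing tag outside the kubelet is to query the registry's standard OCI/Docker Registry v2 manifest endpoint directly. The following is a minimal Python sketch, not taken from this log; it assumes GHCR issues anonymous pull tokens for the repository and uses the requests library.

# Minimal sketch: check whether the tag quoted in the errors above exists on ghcr.io.
# Assumes anonymous pull tokens are available for the (public) repository.
import requests

REGISTRY = "https://ghcr.io"
REPO = "flatcar/calico/apiserver"   # repository quoted in the log
TAG = "v3.30.4"                     # tag containerd failed to resolve

# 1. Obtain an anonymous bearer token scoped to pulling this repository.
tok = requests.get(
    f"{REGISTRY}/token",
    params={"service": "ghcr.io", "scope": f"repository:{REPO}:pull"},
    timeout=10,
).json()["token"]

# 2. Ask for the tag's manifest; 200 means the tag exists, 404 matches the
#    "fetch failed after status: 404 Not Found" entries in this log.
resp = requests.head(
    f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {tok}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
    timeout=10,
)
print(TAG, "->", resp.status_code)   # expect 404 for the failing tag
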
Nov 23 23:04:07.014446 kubelet[3321]: E1123 23:04:07.014240 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:04:09.572213 systemd[1]: Started sshd@14-172.31.29.95:22-139.178.89.65:41062.service - OpenSSH per-connection server daemon (139.178.89.65:41062). Nov 23 23:04:09.773847 sshd[5711]: Accepted publickey for core from 139.178.89.65 port 41062 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:09.776993 sshd-session[5711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:09.785816 systemd-logind[1979]: New session 15 of user core. Nov 23 23:04:09.791402 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:04:10.049925 sshd[5714]: Connection closed by 139.178.89.65 port 41062 Nov 23 23:04:10.050862 sshd-session[5711]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:10.058535 systemd[1]: sshd@14-172.31.29.95:22-139.178.89.65:41062.service: Deactivated successfully. Nov 23 23:04:10.062039 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:04:10.064793 systemd-logind[1979]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:04:10.069218 systemd-logind[1979]: Removed session 15. 
Nov 23 23:04:11.015079 kubelet[3321]: E1123 23:04:11.014860 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:04:14.021309 kubelet[3321]: E1123 23:04:14.021211 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:04:15.013060 kubelet[3321]: E1123 23:04:15.012880 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:04:15.097489 systemd[1]: Started sshd@15-172.31.29.95:22-139.178.89.65:41068.service - OpenSSH per-connection server daemon (139.178.89.65:41068). Nov 23 23:04:15.309716 sshd[5753]: Accepted publickey for core from 139.178.89.65 port 41068 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:15.312825 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:15.322777 systemd-logind[1979]: New session 16 of user core. Nov 23 23:04:15.331508 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:04:15.605993 sshd[5756]: Connection closed by 139.178.89.65 port 41068 Nov 23 23:04:15.607229 sshd-session[5753]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:15.614800 systemd[1]: sshd@15-172.31.29.95:22-139.178.89.65:41068.service: Deactivated successfully. Nov 23 23:04:15.619852 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:04:15.622270 systemd-logind[1979]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:04:15.626076 systemd-logind[1979]: Removed session 16. 
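
The alternation above between ErrImagePull and ImagePullBackOff reflects kubelet's per-image back-off: after each failed pull the wait before the next attempt roughly doubles, up to a cap, which is why the retries for the same image grow further apart over the course of this log. A minimal Python sketch follows, assuming the commonly cited kubelet defaults of a 10-second initial delay and a 300-second cap; the real kubelet also resets the back-off after a quiet period, which is omitted here.

# Illustrative sketch of a doubling image-pull back-off (assumed defaults: 10 s initial, 300 s cap).
from datetime import timedelta

def backoff_delays(initial=timedelta(seconds=10),
                   cap=timedelta(minutes=5),
                   attempts=8):
    """Yield the wait applied before each retry after consecutive pull failures."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

# Consecutive ErrImagePull results would be retried after roughly:
print([str(d) for d in backoff_delays()])
# ['0:00:10', '0:00:20', '0:00:40', '0:01:20', '0:02:40', '0:05:00', '0:05:00', '0:05:00']
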
Nov 23 23:04:15.643308 systemd[1]: Started sshd@16-172.31.29.95:22-139.178.89.65:41084.service - OpenSSH per-connection server daemon (139.178.89.65:41084). Nov 23 23:04:15.848933 sshd[5767]: Accepted publickey for core from 139.178.89.65 port 41084 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:15.851239 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:15.859600 systemd-logind[1979]: New session 17 of user core. Nov 23 23:04:15.873411 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 23:04:16.445016 sshd[5770]: Connection closed by 139.178.89.65 port 41084 Nov 23 23:04:16.445918 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:16.453688 systemd-logind[1979]: Session 17 logged out. Waiting for processes to exit. Nov 23 23:04:16.454827 systemd[1]: sshd@16-172.31.29.95:22-139.178.89.65:41084.service: Deactivated successfully. Nov 23 23:04:16.460734 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 23:04:16.464383 systemd-logind[1979]: Removed session 17. Nov 23 23:04:16.484937 systemd[1]: Started sshd@17-172.31.29.95:22-139.178.89.65:41088.service - OpenSSH per-connection server daemon (139.178.89.65:41088). Nov 23 23:04:16.685492 sshd[5780]: Accepted publickey for core from 139.178.89.65 port 41088 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:16.687848 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:16.697344 systemd-logind[1979]: New session 18 of user core. Nov 23 23:04:16.713431 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 23:04:17.970346 sshd[5783]: Connection closed by 139.178.89.65 port 41088 Nov 23 23:04:17.971802 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:17.982957 systemd[1]: sshd@17-172.31.29.95:22-139.178.89.65:41088.service: Deactivated successfully. Nov 23 23:04:17.994382 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 23:04:18.000079 systemd-logind[1979]: Session 18 logged out. Waiting for processes to exit. 
Nov 23 23:04:18.020833 kubelet[3321]: E1123 23:04:18.020550 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:04:18.024609 kubelet[3321]: E1123 23:04:18.024536 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:04:18.032646 systemd[1]: Started sshd@18-172.31.29.95:22-139.178.89.65:41102.service - OpenSSH per-connection server daemon (139.178.89.65:41102). Nov 23 23:04:18.043276 systemd-logind[1979]: Removed session 18. Nov 23 23:04:18.276771 sshd[5800]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:18.279712 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:18.287764 systemd-logind[1979]: New session 19 of user core. Nov 23 23:04:18.298427 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 23 23:04:18.838207 sshd[5804]: Connection closed by 139.178.89.65 port 41102 Nov 23 23:04:18.838824 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:18.851917 systemd[1]: sshd@18-172.31.29.95:22-139.178.89.65:41102.service: Deactivated successfully. Nov 23 23:04:18.857835 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 23:04:18.861640 systemd-logind[1979]: Session 19 logged out. Waiting for processes to exit. Nov 23 23:04:18.877667 systemd[1]: Started sshd@19-172.31.29.95:22-139.178.89.65:41116.service - OpenSSH per-connection server daemon (139.178.89.65:41116). Nov 23 23:04:18.881034 systemd-logind[1979]: Removed session 19. Nov 23 23:04:19.080414 sshd[5814]: Accepted publickey for core from 139.178.89.65 port 41116 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:19.082653 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:19.091714 systemd-logind[1979]: New session 20 of user core. Nov 23 23:04:19.101420 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 23 23:04:19.354760 sshd[5817]: Connection closed by 139.178.89.65 port 41116 Nov 23 23:04:19.355604 sshd-session[5814]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:19.363685 systemd-logind[1979]: Session 20 logged out. Waiting for processes to exit. Nov 23 23:04:19.365606 systemd[1]: sshd@19-172.31.29.95:22-139.178.89.65:41116.service: Deactivated successfully. Nov 23 23:04:19.370826 systemd[1]: session-20.scope: Deactivated successfully. 
Nov 23 23:04:19.375729 systemd-logind[1979]: Removed session 20. Nov 23 23:04:21.016354 containerd[2006]: time="2025-11-23T23:04:21.015242998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:04:21.269256 containerd[2006]: time="2025-11-23T23:04:21.269054003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:21.271747 containerd[2006]: time="2025-11-23T23:04:21.271663187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:04:21.271880 containerd[2006]: time="2025-11-23T23:04:21.271804535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:04:21.272273 kubelet[3321]: E1123 23:04:21.272076 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:04:21.272889 kubelet[3321]: E1123 23:04:21.272361 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:04:21.272889 kubelet[3321]: E1123 23:04:21.272656 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58e4ff569304c459c01f849858ad86b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:21.276102 containerd[2006]: time="2025-11-23T23:04:21.276037295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:04:21.522695 containerd[2006]: time="2025-11-23T23:04:21.522460728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:21.525536 containerd[2006]: time="2025-11-23T23:04:21.525444852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:04:21.525733 containerd[2006]: time="2025-11-23T23:04:21.525581184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:04:21.525842 kubelet[3321]: E1123 23:04:21.525769 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:04:21.525925 kubelet[3321]: E1123 23:04:21.525841 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:04:21.527512 kubelet[3321]: E1123 23:04:21.526691 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:21.528048 kubelet[3321]: E1123 23:04:21.527968 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:04:23.014625 containerd[2006]: time="2025-11-23T23:04:23.014291880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:04:23.289232 containerd[2006]: time="2025-11-23T23:04:23.289043053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:23.291914 containerd[2006]: time="2025-11-23T23:04:23.291744205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:04:23.291914 containerd[2006]: time="2025-11-23T23:04:23.291825277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:04:23.293721 kubelet[3321]: E1123 23:04:23.293305 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:04:23.293721 kubelet[3321]: E1123 23:04:23.293365 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:04:23.293721 kubelet[3321]: E1123 23:04:23.293559 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:23.296412 containerd[2006]: time="2025-11-23T23:04:23.296345725Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:04:23.570605 containerd[2006]: time="2025-11-23T23:04:23.570306831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:23.572682 containerd[2006]: time="2025-11-23T23:04:23.572491071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:04:23.572682 containerd[2006]: time="2025-11-23T23:04:23.572624559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:04:23.573016 kubelet[3321]: E1123 23:04:23.572951 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:04:23.573103 kubelet[3321]: E1123 23:04:23.573045 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:04:23.574198 kubelet[3321]: E1123 23:04:23.574052 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:23.575947 kubelet[3321]: E1123 23:04:23.575737 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:04:24.392721 systemd[1]: Started sshd@20-172.31.29.95:22-139.178.89.65:51868.service - OpenSSH per-connection server daemon (139.178.89.65:51868). Nov 23 23:04:24.594299 sshd[5831]: Accepted publickey for core from 139.178.89.65 port 51868 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:24.596798 sshd-session[5831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:24.605965 systemd-logind[1979]: New session 21 of user core. 
Nov 23 23:04:24.613392 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 23 23:04:24.908201 sshd[5834]: Connection closed by 139.178.89.65 port 51868 Nov 23 23:04:24.909432 sshd-session[5831]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:24.918944 systemd[1]: sshd@20-172.31.29.95:22-139.178.89.65:51868.service: Deactivated successfully. Nov 23 23:04:24.926176 systemd[1]: session-21.scope: Deactivated successfully. Nov 23 23:04:24.931996 systemd-logind[1979]: Session 21 logged out. Waiting for processes to exit. Nov 23 23:04:24.937613 systemd-logind[1979]: Removed session 21. Nov 23 23:04:29.016897 containerd[2006]: time="2025-11-23T23:04:29.016821582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:04:29.278536 containerd[2006]: time="2025-11-23T23:04:29.278186323Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:29.280773 containerd[2006]: time="2025-11-23T23:04:29.280651123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:04:29.280773 containerd[2006]: time="2025-11-23T23:04:29.280729519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:04:29.281073 kubelet[3321]: E1123 23:04:29.281004 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:29.281932 kubelet[3321]: E1123 23:04:29.281080 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:29.282476 kubelet[3321]: E1123 23:04:29.282302 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhmg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596c4fb774-szwps_calico-apiserver(4be32920-a592-41ee-b676-15a5a370b665): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:29.284362 kubelet[3321]: E1123 23:04:29.284280 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:04:29.948628 systemd[1]: Started sshd@21-172.31.29.95:22-139.178.89.65:60752.service - OpenSSH per-connection server daemon (139.178.89.65:60752). Nov 23 23:04:30.019032 containerd[2006]: time="2025-11-23T23:04:30.017897071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:04:30.159178 sshd[5856]: Accepted publickey for core from 139.178.89.65 port 60752 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:30.161381 sshd-session[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:30.169797 systemd-logind[1979]: New session 22 of user core. Nov 23 23:04:30.179414 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 23 23:04:30.297423 containerd[2006]: time="2025-11-23T23:04:30.297108260Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:30.299802 containerd[2006]: time="2025-11-23T23:04:30.299580308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:04:30.300234 containerd[2006]: time="2025-11-23T23:04:30.299651864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:04:30.300359 kubelet[3321]: E1123 23:04:30.300272 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:04:30.302847 kubelet[3321]: E1123 23:04:30.300355 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:04:30.302847 kubelet[3321]: E1123 23:04:30.300554 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkwx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68fb77858b-7fnfw_calico-system(607d6cea-c322-4995-9bb6-13328b249dcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:30.303335 kubelet[3321]: E1123 23:04:30.303256 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:04:30.468283 sshd[5859]: Connection closed by 139.178.89.65 port 60752 Nov 23 23:04:30.469574 sshd-session[5856]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:30.477732 systemd[1]: sshd@21-172.31.29.95:22-139.178.89.65:60752.service: Deactivated successfully. Nov 23 23:04:30.483257 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 23:04:30.487315 systemd-logind[1979]: Session 22 logged out. Waiting for processes to exit. Nov 23 23:04:30.490696 systemd-logind[1979]: Removed session 22. 
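
Each failure message above layers three wrappers around the registry's answer: the PullImage RPC error, containerd's "failed to pull and unpack image", and "failed to resolve reference", where resolving means mapping the host/repository:tag reference to a manifest digest. The small Python helper below is purely illustrative (it is not containerd's reference parser and ignores digests and default-registry handling); it only shows the parts of the references quoted in these errors.

# Illustrative helper: split an image reference like those quoted above into
# registry host, repository path, and tag. Digest references are not handled.
def split_reference(ref: str):
    host, _, remainder = ref.partition("/")
    if ":" in remainder:
        repo, _, tag = remainder.rpartition(":")
    else:
        repo, tag = remainder, "latest"
    return {"registry": host, "repository": repo, "tag": tag}

print(split_reference("ghcr.io/flatcar/calico/kube-controllers:v3.30.4"))
# {'registry': 'ghcr.io', 'repository': 'flatcar/calico/kube-controllers', 'tag': 'v3.30.4'}
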
Nov 23 23:04:31.018253 containerd[2006]: time="2025-11-23T23:04:31.017913956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:04:31.311683 containerd[2006]: time="2025-11-23T23:04:31.311444085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:31.314324 containerd[2006]: time="2025-11-23T23:04:31.314167869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:04:31.314324 containerd[2006]: time="2025-11-23T23:04:31.314267109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:04:31.314875 kubelet[3321]: E1123 23:04:31.314803 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:04:31.315518 kubelet[3321]: E1123 23:04:31.314888 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:04:31.315518 kubelet[3321]: E1123 23:04:31.315079 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxk77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-sjjzv_calico-system(d094bbd9-4e37-478d-88c3-aa6e7c244a7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:31.316913 kubelet[3321]: E1123 23:04:31.316852 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:04:32.020288 containerd[2006]: time="2025-11-23T23:04:32.019809501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:04:32.280687 containerd[2006]: time="2025-11-23T23:04:32.280531174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:32.282954 containerd[2006]: time="2025-11-23T23:04:32.282871510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:04:32.283068 containerd[2006]: time="2025-11-23T23:04:32.283003330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:04:32.283386 kubelet[3321]: E1123 23:04:32.283302 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:32.283488 kubelet[3321]: E1123 23:04:32.283385 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:04:32.283686 kubelet[3321]: E1123 23:04:32.283575 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqlth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596c4fb774-qwzhg_calico-apiserver(33a858d5-f639-4092-9d21-043beaa938d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:32.285450 kubelet[3321]: E1123 23:04:32.285388 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:04:34.016535 kubelet[3321]: E1123 23:04:34.016196 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:04:35.511551 systemd[1]: Started sshd@22-172.31.29.95:22-139.178.89.65:60764.service - OpenSSH per-connection server daemon (139.178.89.65:60764). Nov 23 23:04:35.739267 sshd[5872]: Accepted publickey for core from 139.178.89.65 port 60764 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:35.742573 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:35.754222 systemd-logind[1979]: New session 23 of user core. Nov 23 23:04:35.765483 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 23 23:04:36.059695 sshd[5875]: Connection closed by 139.178.89.65 port 60764 Nov 23 23:04:36.061448 sshd-session[5872]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:36.073816 systemd-logind[1979]: Session 23 logged out. Waiting for processes to exit. Nov 23 23:04:36.074867 systemd[1]: sshd@22-172.31.29.95:22-139.178.89.65:60764.service: Deactivated successfully. Nov 23 23:04:36.088571 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 23:04:36.096441 systemd-logind[1979]: Removed session 23. 
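The container dumps above include goldmane's exec probes in flattened form: a liveness probe running /health -live (timeout 5s, period 60s) and a readiness probe running /health -ready (timeout 5s, period 30s). The same probes restated with k8s.io/api/core/v1 types, purely to make the flattened struct readable; all values are copied from the dump:

    // probes.go - the goldmane probes from the container dump above,
    // rewritten with k8s.io/api/core/v1 types for readability.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	liveness := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{Command: []string{"/health", "-live"}},
    		},
    		TimeoutSeconds:   5,
    		PeriodSeconds:    60,
    		SuccessThreshold: 1,
    		FailureThreshold: 3,
    	}
    	readiness := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{Command: []string{"/health", "-ready"}},
    		},
    		TimeoutSeconds:   5,
    		PeriodSeconds:    30,
    		SuccessThreshold: 1,
    		FailureThreshold: 3,
    	}
    	fmt.Printf("liveness: %+v\nreadiness: %+v\n", liveness, readiness)
    }
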
Nov 23 23:04:37.015744 kubelet[3321]: E1123 23:04:37.015662 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:04:41.013934 kubelet[3321]: E1123 23:04:41.013855 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:04:41.104626 systemd[1]: Started sshd@23-172.31.29.95:22-139.178.89.65:43912.service - OpenSSH per-connection server daemon (139.178.89.65:43912). Nov 23 23:04:41.342318 sshd[5912]: Accepted publickey for core from 139.178.89.65 port 43912 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:41.345837 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:41.359633 systemd-logind[1979]: New session 24 of user core. Nov 23 23:04:41.366489 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 23 23:04:41.702526 sshd[5915]: Connection closed by 139.178.89.65 port 43912 Nov 23 23:04:41.702926 sshd-session[5912]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:41.715175 systemd[1]: sshd@23-172.31.29.95:22-139.178.89.65:43912.service: Deactivated successfully. Nov 23 23:04:41.720713 systemd[1]: session-24.scope: Deactivated successfully. Nov 23 23:04:41.724247 systemd-logind[1979]: Session 24 logged out. Waiting for processes to exit. Nov 23 23:04:41.729363 systemd-logind[1979]: Removed session 24. 
Nov 23 23:04:43.013635 kubelet[3321]: E1123 23:04:43.013085 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:04:45.015623 kubelet[3321]: E1123 23:04:45.015481 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:04:46.017158 kubelet[3321]: E1123 23:04:46.015736 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:04:46.743898 systemd[1]: Started sshd@24-172.31.29.95:22-139.178.89.65:43928.service - OpenSSH per-connection server daemon (139.178.89.65:43928). Nov 23 23:04:46.957296 sshd[5931]: Accepted publickey for core from 139.178.89.65 port 43928 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:46.960673 sshd-session[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:46.970195 systemd-logind[1979]: New session 25 of user core. Nov 23 23:04:46.979794 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 23 23:04:47.311898 sshd[5934]: Connection closed by 139.178.89.65 port 43928 Nov 23 23:04:47.315906 sshd-session[5931]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:47.328726 systemd-logind[1979]: Session 25 logged out. Waiting for processes to exit. Nov 23 23:04:47.330262 systemd[1]: sshd@24-172.31.29.95:22-139.178.89.65:43928.service: Deactivated successfully. Nov 23 23:04:47.340959 systemd[1]: session-25.scope: Deactivated successfully. Nov 23 23:04:47.348412 systemd-logind[1979]: Removed session 25. 
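The recurring "Error syncing pod, skipping" lines correspond to containers sitting in ErrImagePull / ImagePullBackOff. A small client-go sketch (the kubeconfig path is an assumption; in-cluster config would work equally well) that lists the calico-system pods and prints each container's waiting reason, which is how these conditions appear through the API:

    // waiting.go - sketch assuming an admin kubeconfig at an assumed path.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path assumed for illustration; use your cluster's kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	pods, err := clientset.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		for _, st := range p.Status.ContainerStatuses {
    			if w := st.State.Waiting; w != nil {
    				// Expect ImagePullBackOff / ErrImagePull for the pods named in the log above.
    				fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, w.Reason, w.Message)
    			}
    		}
    	}
    }
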
Nov 23 23:04:48.018045 kubelet[3321]: E1123 23:04:48.017627 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:04:50.016235 kubelet[3321]: E1123 23:04:50.015570 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:04:52.017969 kubelet[3321]: E1123 23:04:52.017700 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:04:52.352565 systemd[1]: Started sshd@25-172.31.29.95:22-139.178.89.65:41850.service - OpenSSH per-connection server daemon (139.178.89.65:41850). Nov 23 23:04:52.562166 sshd[5952]: Accepted publickey for core from 139.178.89.65 port 41850 ssh2: RSA SHA256:VsI9X3Y/7PBvBIplFGxtTvzhDt4EcjbHD07saidZyqk Nov 23 23:04:52.565273 sshd-session[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:52.574098 systemd-logind[1979]: New session 26 of user core. Nov 23 23:04:52.582432 systemd[1]: Started session-26.scope - Session 26 of User core. 
Nov 23 23:04:52.874165 sshd[5955]: Connection closed by 139.178.89.65 port 41850 Nov 23 23:04:52.874970 sshd-session[5952]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:52.884856 systemd[1]: sshd@25-172.31.29.95:22-139.178.89.65:41850.service: Deactivated successfully. Nov 23 23:04:52.895054 systemd[1]: session-26.scope: Deactivated successfully. Nov 23 23:04:52.898899 systemd-logind[1979]: Session 26 logged out. Waiting for processes to exit. Nov 23 23:04:52.902881 systemd-logind[1979]: Removed session 26. Nov 23 23:04:54.017154 kubelet[3321]: E1123 23:04:54.016936 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:04:57.013917 kubelet[3321]: E1123 23:04:57.013823 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:04:57.014574 kubelet[3321]: E1123 23:04:57.014049 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:05:00.017253 kubelet[3321]: E1123 23:05:00.016915 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 
23:05:01.533595 systemd[1]: Started sshd@26-172.31.29.95:22-45.55.151.3:54631.service - OpenSSH per-connection server daemon (45.55.151.3:54631). Nov 23 23:05:03.013338 containerd[2006]: time="2025-11-23T23:05:03.013275710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:05:03.257328 containerd[2006]: time="2025-11-23T23:05:03.257166700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:03.259439 containerd[2006]: time="2025-11-23T23:05:03.259366168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:05:03.259718 containerd[2006]: time="2025-11-23T23:05:03.259501480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:05:03.260544 kubelet[3321]: E1123 23:05:03.260247 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:05:03.260544 kubelet[3321]: E1123 23:05:03.260313 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:05:03.260544 kubelet[3321]: E1123 23:05:03.260455 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f58e4ff569304c459c01f849858ad86b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:03.263679 containerd[2006]: time="2025-11-23T23:05:03.263255140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:05:03.522193 containerd[2006]: time="2025-11-23T23:05:03.521988053Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:03.524327 containerd[2006]: time="2025-11-23T23:05:03.524248625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:05:03.524459 containerd[2006]: time="2025-11-23T23:05:03.524386397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:05:03.524627 kubelet[3321]: E1123 23:05:03.524564 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:05:03.524727 kubelet[3321]: E1123 23:05:03.524639 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:05:03.524902 kubelet[3321]: E1123 23:05:03.524831 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cchds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7bb99cb694-xrvwh_calico-system(77afc798-8fc5-43e1-9a7e-049f9b28d8f3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:03.526146 kubelet[3321]: E1123 23:05:03.526066 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:05:03.579227 sshd[5967]: Connection closed by 45.55.151.3 port 54631 Nov 23 23:05:03.581552 systemd[1]: sshd@26-172.31.29.95:22-45.55.151.3:54631.service: Deactivated successfully. Nov 23 23:05:03.824499 systemd[1]: Started sshd@27-172.31.29.95:22-45.55.151.3:52475.service - OpenSSH per-connection server daemon (45.55.151.3:52475). 
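The whisker-backend container spec is dumped above as one flattened struct literal. Its key fields restated with corev1 types so they are easier to read; every value is copied from the dump, and only the helper names are mine:

    // whiskerbackend.go - key fields of the whisker-backend container from the
    // dump above, restated with k8s.io/api/core/v1 types (values copied verbatim).
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func boolPtr(b bool) *bool    { return &b }
    func int64Ptr(i int64) *int64 { return &i }

    func main() {
    	whiskerBackend := corev1.Container{
    		Name:  "whisker-backend",
    		Image: "ghcr.io/flatcar/calico/whisker-backend:v3.30.4",
    		Env: []corev1.EnvVar{
    			{Name: "LOG_LEVEL", Value: "INFO"},
    			{Name: "PORT", Value: "3002"},
    			{Name: "GOLDMANE_HOST", Value: "goldmane.calico-system.svc.cluster.local:7443"},
    			{Name: "TLS_CERT_PATH", Value: "/whisker-backend-key-pair/tls.crt"},
    			{Name: "TLS_KEY_PATH", Value: "/whisker-backend-key-pair/tls.key"},
    		},
    		VolumeMounts: []corev1.VolumeMount{
    			{Name: "whisker-backend-key-pair", ReadOnly: true, MountPath: "/whisker-backend-key-pair"},
    			{Name: "whisker-ca-bundle", ReadOnly: true, MountPath: "/etc/pki/tls/certs"},
    			{Name: "kube-api-access-cchds", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
    		},
    		ImagePullPolicy: corev1.PullIfNotPresent,
    		SecurityContext: &corev1.SecurityContext{
    			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
    			Privileged:               boolPtr(false),
    			RunAsUser:                int64Ptr(10001),
    			RunAsGroup:               int64Ptr(10001),
    			RunAsNonRoot:             boolPtr(true),
    			AllowPrivilegeEscalation: boolPtr(false),
    			SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
    		},
    	}
    	fmt.Println(whiskerBackend.Name, whiskerBackend.Image)
    }
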
Nov 23 23:05:04.307341 sshd[5972]: Connection closed by 45.55.151.3 port 52475 [preauth] Nov 23 23:05:04.310343 systemd[1]: sshd@27-172.31.29.95:22-45.55.151.3:52475.service: Deactivated successfully. Nov 23 23:05:06.013898 kubelet[3321]: E1123 23:05:06.013573 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-szwps" podUID="4be32920-a592-41ee-b676-15a5a370b665" Nov 23 23:05:07.057013 kubelet[3321]: E1123 23:05:07.056659 3321 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 23 23:05:07.182431 systemd[1]: cri-containerd-28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0.scope: Deactivated successfully. Nov 23 23:05:07.184358 systemd[1]: cri-containerd-28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0.scope: Consumed 5.855s CPU time, 56.4M memory peak, 128K read from disk. Nov 23 23:05:07.191887 containerd[2006]: time="2025-11-23T23:05:07.191771047Z" level=info msg="received container exit event container_id:\"28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0\" id:\"28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0\" pid:3161 exit_status:1 exited_at:{seconds:1763939107 nanos:191337295}" Nov 23 23:05:07.244745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0-rootfs.mount: Deactivated successfully. Nov 23 23:05:07.601354 systemd[1]: cri-containerd-c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a.scope: Deactivated successfully. Nov 23 23:05:07.601963 systemd[1]: cri-containerd-c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a.scope: Consumed 28.399s CPU time, 101M memory peak. Nov 23 23:05:07.607254 containerd[2006]: time="2025-11-23T23:05:07.606726933Z" level=info msg="received container exit event container_id:\"c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a\" id:\"c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a\" pid:3908 exit_status:1 exited_at:{seconds:1763939107 nanos:606071037}" Nov 23 23:05:07.654449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a-rootfs.mount: Deactivated successfully. 
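The "Failed to update lease" entry shows the kubelet timing out while renewing the node lease at /apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-95, right before the kube-controller-manager and tigera-operator containers exit. A sketch that reads the same Lease via client-go to check when it was last renewed (kubeconfig path is an assumption):

    // lease.go - sketch assuming an admin kubeconfig; reads the node lease the
    // kubelet above failed to renew and prints its last renew time.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // path assumed
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	lease, err := clientset.CoordinationV1().Leases("kube-node-lease").Get(
    		context.Background(), "ip-172-31-29-95", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	if lease.Spec.HolderIdentity != nil {
    		fmt.Println("holder:", *lease.Spec.HolderIdentity)
    	}
    	if lease.Spec.RenewTime != nil {
    		fmt.Println("last renew:", lease.Spec.RenewTime.Time)
    	}
    }
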
Nov 23 23:05:07.904989 kubelet[3321]: I1123 23:05:07.904918 3321 scope.go:117] "RemoveContainer" containerID="28c877280f0680a4c632a158b14a895dab10e328204356677af0ffa718887df0" Nov 23 23:05:07.909228 containerd[2006]: time="2025-11-23T23:05:07.909047867Z" level=info msg="CreateContainer within sandbox \"feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 23 23:05:07.911343 kubelet[3321]: I1123 23:05:07.911286 3321 scope.go:117] "RemoveContainer" containerID="c44f15903a59897ff4146259db4015351822589b96554558ffe2773775bbc26a" Nov 23 23:05:07.916064 containerd[2006]: time="2025-11-23T23:05:07.915800987Z" level=info msg="CreateContainer within sandbox \"fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 23 23:05:07.938055 containerd[2006]: time="2025-11-23T23:05:07.936048131Z" level=info msg="Container 637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:07.938055 containerd[2006]: time="2025-11-23T23:05:07.937487255Z" level=info msg="Container 012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:07.957107 containerd[2006]: time="2025-11-23T23:05:07.956977739Z" level=info msg="CreateContainer within sandbox \"fbf0d0266d86d8bea86c5fc3ad55534ef060e46e9f209b746fa7e6bb4b6e0746\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833\"" Nov 23 23:05:07.960762 containerd[2006]: time="2025-11-23T23:05:07.960703811Z" level=info msg="StartContainer for \"012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833\"" Nov 23 23:05:07.963061 containerd[2006]: time="2025-11-23T23:05:07.962980955Z" level=info msg="connecting to shim 012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833" address="unix:///run/containerd/s/d27d55fa9612ae97cd3d37b5246bb24d6ab893bddc71bb363ca026ceae8126e1" protocol=ttrpc version=3 Nov 23 23:05:07.981441 containerd[2006]: time="2025-11-23T23:05:07.981033227Z" level=info msg="CreateContainer within sandbox \"feb60d196af9e828621002556b9d7e6bfbbef9b9134bfbd51477794e6cef62bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982\"" Nov 23 23:05:07.982959 containerd[2006]: time="2025-11-23T23:05:07.982360667Z" level=info msg="StartContainer for \"637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982\"" Nov 23 23:05:07.985778 containerd[2006]: time="2025-11-23T23:05:07.985659923Z" level=info msg="connecting to shim 637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982" address="unix:///run/containerd/s/148114e997683583565fb4f4c075438fa1ded50af1fc7c2e3a8a8dd2bf12490c" protocol=ttrpc version=3 Nov 23 23:05:08.027525 systemd[1]: Started cri-containerd-012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833.scope - libcontainer container 012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833. Nov 23 23:05:08.063427 systemd[1]: Started cri-containerd-637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982.scope - libcontainer container 637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982. 
Nov 23 23:05:08.144984 containerd[2006]: time="2025-11-23T23:05:08.144937868Z" level=info msg="StartContainer for \"012a4aff98fd7f9a35163350ed0247a8132d4706718174c44573fd36659fb833\" returns successfully" Nov 23 23:05:08.180549 containerd[2006]: time="2025-11-23T23:05:08.180361604Z" level=info msg="StartContainer for \"637ba0441424b64f13feee8828167f686d95e60dc97a318a363f44bd3e316982\" returns successfully" Nov 23 23:05:09.013438 kubelet[3321]: E1123 23:05:09.013335 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68fb77858b-7fnfw" podUID="607d6cea-c322-4995-9bb6-13328b249dcf" Nov 23 23:05:11.013908 kubelet[3321]: E1123 23:05:11.013812 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-sjjzv" podUID="d094bbd9-4e37-478d-88c3-aa6e7c244a7b" Nov 23 23:05:12.012635 kubelet[3321]: E1123 23:05:12.012571 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596c4fb774-qwzhg" podUID="33a858d5-f639-4092-9d21-043beaa938d2" Nov 23 23:05:12.207174 systemd[1]: cri-containerd-51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d.scope: Deactivated successfully. Nov 23 23:05:12.207772 systemd[1]: cri-containerd-51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d.scope: Consumed 4.475s CPU time, 21.7M memory peak, 80K read from disk. Nov 23 23:05:12.216151 containerd[2006]: time="2025-11-23T23:05:12.215407152Z" level=info msg="received container exit event container_id:\"51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d\" id:\"51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d\" pid:3168 exit_status:1 exited_at:{seconds:1763939112 nanos:214723728}" Nov 23 23:05:12.263263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d-rootfs.mount: Deactivated successfully. 
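After the crashed kube-controller-manager and tigera-operator containers are removed, the kubelet creates replacements with Attempt:1 and starts them successfully. A sketch that lists containers straight from the CRI runtime service to observe those attempt counters and states; the socket path is assumed to be containerd's default CRI endpoint:

    // crilist.go - sketch; talks to the CRI runtime service over the containerd
    // socket (path assumed) and prints name, attempt and state for each container.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range resp.Containers {
    		// e.g. kube-controller-manager attempt=1 CONTAINER_RUNNING after the restart above.
    		fmt.Printf("%s attempt=%d state=%s\n",
    			c.Metadata.Name, c.Metadata.Attempt, c.State.String())
    	}
    }
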
Nov 23 23:05:12.942760 kubelet[3321]: I1123 23:05:12.942714 3321 scope.go:117] "RemoveContainer" containerID="51d62c82b088b1d0fb4b9b21f6fd0d2cc176c49e3e140b4a4fcfea252a80447d" Nov 23 23:05:12.946656 containerd[2006]: time="2025-11-23T23:05:12.946598116Z" level=info msg="CreateContainer within sandbox \"072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 23 23:05:12.964478 containerd[2006]: time="2025-11-23T23:05:12.964412080Z" level=info msg="Container d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:12.983660 containerd[2006]: time="2025-11-23T23:05:12.983577664Z" level=info msg="CreateContainer within sandbox \"072200553f3b548375e03b4b8c0c2b0c0c877c721213fc7710e0eeb4dfa41e9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d\"" Nov 23 23:05:12.984510 containerd[2006]: time="2025-11-23T23:05:12.984419896Z" level=info msg="StartContainer for \"d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d\"" Nov 23 23:05:12.986599 containerd[2006]: time="2025-11-23T23:05:12.986528536Z" level=info msg="connecting to shim d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d" address="unix:///run/containerd/s/a72935dc33271586b87f12b4a88bbebc40d4e77f8107a82b41350e0aefd55460" protocol=ttrpc version=3 Nov 23 23:05:13.013885 containerd[2006]: time="2025-11-23T23:05:13.013538688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:05:13.031420 systemd[1]: Started cri-containerd-d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d.scope - libcontainer container d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d. 
Nov 23 23:05:13.127322 containerd[2006]: time="2025-11-23T23:05:13.126945517Z" level=info msg="StartContainer for \"d0db1e2d9868253bb1802546e2b617649da14322c6b6189b5f14ba44f90fec9d\" returns successfully" Nov 23 23:05:13.296895 containerd[2006]: time="2025-11-23T23:05:13.296592962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:13.299213 containerd[2006]: time="2025-11-23T23:05:13.299015954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:05:13.299213 containerd[2006]: time="2025-11-23T23:05:13.299040302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:05:13.299682 kubelet[3321]: E1123 23:05:13.299607 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:05:13.299784 kubelet[3321]: E1123 23:05:13.299679 3321 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:05:13.299947 kubelet[3321]: E1123 23:05:13.299851 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:13.304014 containerd[2006]: time="2025-11-23T23:05:13.303911774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:05:13.590227 containerd[2006]: time="2025-11-23T23:05:13.589972143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:13.593021 containerd[2006]: time="2025-11-23T23:05:13.592870707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:05:13.593021 containerd[2006]: time="2025-11-23T23:05:13.592944339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:05:13.593602 kubelet[3321]: E1123 23:05:13.593547 3321 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:05:13.593958 kubelet[3321]: E1123 23:05:13.593717 3321 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:05:13.593958 kubelet[3321]: E1123 23:05:13.593884 3321 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf7bd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-d6qd7_calico-system(eb8960c6-f005-4ea0-b8f6-6850fa0745aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:13.595605 kubelet[3321]: E1123 23:05:13.595493 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-d6qd7" podUID="eb8960c6-f005-4ea0-b8f6-6850fa0745aa" Nov 23 23:05:15.014476 kubelet[3321]: E1123 23:05:15.014400 3321 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7bb99cb694-xrvwh" podUID="77afc798-8fc5-43e1-9a7e-049f9b28d8f3" Nov 23 23:05:17.057828 kubelet[3321]: E1123 23:05:17.057625 3321 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
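The calico-csi container dump above passes the node name through the downward API (KUBE_NODE_NAME from spec.nodeName) and mounts /var/lib/kubelet with Bidirectional propagation. Those two fragments restated with corev1 types for readability; values are copied from the dump, variable names are mine:

    // csienv.go - the downward-API env var and kubelet-dir mount from the
    // calico-csi container dump above, restated with k8s.io/api/core/v1 types.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	biDir := corev1.MountPropagationBidirectional

    	nodeName := corev1.EnvVar{
    		Name: "KUBE_NODE_NAME",
    		ValueFrom: &corev1.EnvVarSource{
    			FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "spec.nodeName"},
    		},
    	}
    	kubeletDir := corev1.VolumeMount{
    		Name:             "kubelet-dir",
    		MountPath:        "/var/lib/kubelet",
    		MountPropagation: &biDir,
    	}
    	fmt.Printf("%+v\n%+v\n", nodeName, kubeletDir)
    }
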